FRICO 2023 - The 26th Workshop on Future Research in Combinatorial Optimization
August 14–18, 2023, Eindhoven, The Netherlands

The 26th Workshop on Future Research in Combinatorial Optimization will take place from August 14 to August 18, 2023, at Eindhoven University of Technology, The Netherlands.

FRICO is directed at PhD students from the following research areas:

Every participant will give a ~30 minute presentation about their research. It is especially encouraged to present ongoing work and open problems.

FRICO does not have a conference fee. This is made possible by our sponsors:

On Wednesday, August 16, the 'industry day', our sponsors will explain how they apply combinatorial optimization in practice.

To get a general impression of FRICO, visit the webpage of FRICO 2022, which took place in Bonn, or have a look at the history of FRICO. If you have any questions, do not hesitate to contact us at (this mailbox will be closed after the workshop; please contact the organizers individually).

Main organizers


The organization in particular thanks Marianne de Bruin, Christopher Hojny and Frits Spieksma for their support and assistance in organizing this workshop.

FRICO 2023 has concluded!

We sincerely thank all participants for their contributions.

The best talk award at FRICO 2023 goes to Torben Schürenberg from the University of Bremen for his talk "Hunting an invisible rabbit on an infinite graph in finite time". Congratulations!

We are happy to announce that FRICO 2024 will take place at the Otto von Guericke University Magdeburg. For more information, please consult their website!



Please see the detailed schedule with speakers in each session and abstracts below.

Monday
  9:00  Registration and opening
  10:00 2 talks
  11:00 Coffee break
  11:30 3 talks
  13:00 Lunch
  14:30 3 talks
  16:00 Coffee break
  16:30 3 talks
  18:30 Dinner and pub quiz

Tuesday
  9:30  3 talks
  11:00 Coffee break
  11:30 3 talks
  13:00 Lunch
  14:30 3 talks
  16:00 Coffee break
  16:30 3 talks
  18:30 Dinner and game night¹

Wednesday
  9:30  2 talks
  10:30 Coffee break
  11:00 3 talks
  12:30 Lunch
  14:00 Industry talk
  15:00 Coffee break
  15:30 Industry talk
  16:30 Coffee break
  17:00 Industry talk
  18:00 Break
  18:30 Travel to dinner
  19:00 onw. Conference dinner

Thursday
  9:30  3 talks
  11:00 Coffee break
  11:30 3 talks
  13:00 Lunch
  14:00 Activity and dinner

Friday
  9:30  3 talks
  11:00 Coffee break
  11:30 2 talks
  12:30 Closing and awards
  13:30 Lunch

¹ Participants are invited to bring their own (board) games to play! Some board games and playing cards are available at the venue.

Session schedule and speakers


9:00 Registration and opening
In recent years, significant work has been dedicated to a longstanding conjecture concerning IPs (integer programs) involving $\Delta$-modular matrices. A matrix of size $n \times m$ is called $\Delta$-modular if the determinant of every $n \times n$ submatrix falls within the range $\{-\Delta, \dots, \Delta\}$. The conjecture states that IPs with a constraint matrix of this particular form can be solved in polynomial time. Interestingly, for $\Delta \geq 2$, no efficient algorithm is known for deciding whether a given matrix is $\Delta$-modular. I will present current work on recognizing bimodular matrices, which is the special case $\Delta = 2$.
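To make the definition tangible, here is a small brute-force Python sketch (an illustration, not from the talk) that checks $\Delta$-modularity of an integer matrix with $n$ rows by enumerating all $n \times n$ column submatrices; it is exponential in $m$ and meant for toy instances only:

```python
from itertools import combinations

def det(mat):
    """Integer determinant via Laplace expansion (fine for tiny matrices)."""
    n = len(mat)
    if n == 1:
        return mat[0][0]
    return sum(
        (-1) ** j * mat[0][j] * det([row[:j] + row[j + 1:] for row in mat[1:]])
        for j in range(n)
    )

def is_delta_modular(A, delta):
    """Check whether every n x n column submatrix of the n x m matrix A
    has determinant in {-delta, ..., delta}."""
    n, m = len(A), len(A[0])
    return all(
        abs(det([[row[j] for j in cols] for row in A])) <= delta
        for cols in combinations(range(m), n)
    )
```

For example, `[[1, 0, 1], [0, 1, 1]]` is totally unimodular (hence $1$-modular), while `[[1, 1], [-1, 1]]` contains a submatrix of determinant $2$ and is therefore bimodular but not unimodular.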
Let $D = (V, \overrightarrow{E_1} \cup E_2)$ be a planar mixed graph where $\overrightarrow{E_1}$ is acyclic (no directed cycle) and there is no directed path between the ends of an unoriented edge. Moreover, no cycle in $E_1$ separates $E_2$. We work on the acyclic orientation conjecture, which states that there exists an orientation $\overrightarrow{E_2}$ such that both $\overrightarrow{E_1} \cup \overrightarrow{E_2}$ and $\overrightarrow{E_1} \cup \overleftarrow{E_2}$ are acyclic. This conjecture was introduced in [1] in relation to the well-known conjecture of Woodall [2]. We prove the conjecture for the case when $E_1$ is a Hamilton cycle. Let $G(D) = (E_2, A)$ be the graph where $e,f \in E_2$ are adjacent if and only if there exists an "almost directed" cycle $C$ in $D$ such that $C \cap E_2 = \{e, f\}$. We also prove the conjecture for the case when $G(D)$ has at most two connected components. This is joint work with Ahmad Abdi.

[1] M. Chudnovsky, K. Edwards, R. Kim, A. Scott, and P. Seymour. Disjoint dijoins. Journal of Combinatorial Theory, Series B, 120:18–35, 2016.
[2] D. Woodall. Menger and König systems. In Y. Alavi and D. Lick, editors, Theory and Applications of Graphs., volume 642 of Lecture Notes in Mathematics. Springer, Berlin, Heidelberg, 1978.
11:00 Coffee break
An independent set in a graph is a collection of vertices that are pairwise non-adjacent. The cardinality of the largest independent set in $G$ is denoted by $\alpha(G)$. The independence polynomial of a graph $G = (V, E)$ was introduced by Gutman and Harary in 1983 and is defined as \[ I(G;x) = \sum_{k=0}^{\alpha(G)}{s_k}x^{k}={s_0}+{s_1}x+{s_2}x^{2}+\dots+{s_{\alpha(G)}}x^{\alpha(G)}, \] where $s_k$ is the number of independent sets in $G$ of size $k$. Alavi, Malde, Schwenk, and Erdős conjectured in 1987 that the independence polynomials of trees are unimodal, and many researchers believed that this conjecture could be strengthened to its corresponding log-concave version. However, we refute this strengthened version by exhibiting infinite families of trees whose independence polynomials are not log-concave.
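The coefficients $s_k$ are easy to compute by brute force on small graphs; the sketch below (illustrative only, not from the talk) enumerates independent sets and tests log-concavity, i.e. $s_k^2 \geq s_{k-1} s_{k+1}$:

```python
from itertools import combinations

def independence_polynomial(vertices, edges):
    """Coefficients s_0, s_1, ... of I(G; x), by enumerating vertex subsets."""
    coeffs = []
    for k in range(len(vertices) + 1):
        s_k = sum(
            1
            for S in combinations(vertices, k)
            if not any(u in S and v in S for u, v in edges)
        )
        if s_k == 0:  # no independent set of this size, so none larger either
            break
        coeffs.append(s_k)
    return coeffs

def is_log_concave(c):
    return all(c[i] ** 2 >= c[i - 1] * c[i + 1] for i in range(1, len(c) - 1))

# Path on 4 vertices: independent sets of sizes 0, 1, 2
path4 = independence_polynomial(range(4), [(0, 1), (1, 2), (2, 3)])
```

For the path on four vertices this yields the coefficients $[1, 4, 3]$, which are both unimodal and log-concave; the trees discussed in the talk violate the latter property.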
We investigate the parameterized complexity of counting and detecting occurrences of small patterns in unit disk graphs: Given an $n$-vertex unit disk graph $G$ with an embedding of ply $p$ (that is, the graph is represented as an intersection graph of disks of unit size, and each point is contained in at most $p$ disks) and a $k$-vertex unit disk graph $P$, count the number of (induced) copies of $P$ in $G$. For general patterns $P$, we give a $2^{O(pk/ \log k)} n^{O(1)}$ time algorithm for counting pattern occurrences. We show this is tight, even for ply $p = 2$ and $k = n$: any $2^{o(n/ \log n)} n^{O(1)}$ time algorithm would violate the Exponential Time Hypothesis (ETH).

Semidefinite programming (SDP) is a powerful generalization of linear programming (LP). Interior point methods (IPM) are the main methods to solve SDPs: take a point in the interior of the set of solutions and track increasingly accurate solutions along a central path. Problems may arise, however, when the interior of the feasible region is empty (weakly feasible problems [2]).

In [1], the authors describe a symbolic algorithm for solving SDPs given in linear matrix inequality (LMI) representation. Using this approach, it is possible to solve degenerate instances of SDPs under certain genericity conditions on the problem data. Our goal is to study and exploit the structure of (weakly feasible) SDP problems to find representations of exact solutions to such problems by combining numerical algorithms and symbolic computation.

[1] Henrion, D., Naldi, S. and El Din, M.S., 2016. Exact algorithms for linear matrix inequalities. SIAM Journal on Optimization, 26(4), pp.2512-2539.
[2] Pataki, G., 2017. Bad semidefinite programs: they all look the same. SIAM Journal on Optimization, 27(1), pp.146-172.

13:00 Lunch
The problem of finding the $n$th smallest number in a collection is known as the selection problem. But what if the collection of numbers is not directly accessible to us? This is what explorable heap selection is about: the problem of selecting the $n$th smallest value in a binary heap, where the values can only be accessed by traversing the binary tree. In this setting we want to minimize the distance traveled in the tree. The problem was originally proposed by Karp, Saks and Wigderson (FOCS '86), who also proposed algorithms for it. Recently we found a new algorithm that significantly improves on the running time. In this talk I will explain the problem, our new algorithm, and the connection between the problem and the branch-and-bound algorithm for solving integer programs.
Standard mixed-integer programming formulations for the stable set problem on $n$-node graphs require $n$ integer variables. We prove that this is almost optimal: we give an explicit family of stable set polytopes of $n$-node graphs for which every polynomial-size formulation requires $\Omega(n / \log^2 n)$ integer variables. By a polyhedral reduction we obtain an analogous result for $n$-item knapsack polytopes. This improves the previously known bounds of $\Omega(\sqrt n / \log n)$ by Cevallos, Weltge & Zenklusen (SODA 2018). To this end, we extend a result of Göös, Jain & Watson (FOCS 2016, SIAM J. Comput. 2018) showing that the linear extension complexity of stable set polytopes of some $n$-node graphs is $2^{\Omega(n / \log n)}$. We show that the same bound holds for $\varepsilon/n$-approximations of these polytopes, for some constant $\varepsilon > 0$. Along the way, we simplify several of their original arguments to obtain a proof that is accessible to a broader community. For instance, our proof does not require reductions to certain CSP problems or the notion of Karchmer-Wigderson games. This is joint work with Makrand Sinha and Stefan Weltge.
The $\epsilon$-constraint method is a widely used scalarization technique that transforms a multiobjective optimization problem into a scalar-valued optimization problem to obtain an efficient solution or nondominated image. We derive a recursive algorithm to compute $Y_N$, the set of all nondominated images, for discrete multiobjective optimization problems that exploits the structure of the parameter space associated with the $\epsilon$-constraint method: We define a predecessor/successor relation between nondominated images and an initial set of nondominated images such that calculating the initial set and the predecessors of a nondominated image is equivalent to computing the nondominated images of a multiobjective optimization problem with one less objective function. Furthermore, each nondominated image is connected to the initial set by the defined relation. Thus, the algorithm is able to compute $Y_N$. We illustrate the algorithm for triobjective optimization problems and show that in this case at most $2|Y_N| + 1$ single objective optimization problems need to be solved.
16:00 Coffee break
In cooperative game theory, a core allocation guarantees that no subset of players prefers to deviate. However, a core allocation requires distributing the exact cost of the grand coalition over the individual players. This thesis studies the optimization problem of maximizing the total amount that can be distributed over the individual players while maintaining stability with respect to proper coalitions. The corresponding solution concept is referred to as the "almost core". It captures how much one could maximally tax the total cost of the grand coalition without any proper coalition wanting to deviate. We derive some complexity-theoretic results on almost core allocations and study them for a class of games with non-empty core: minimum cost spanning tree games. We propose a tight $2$-approximation algorithm for computing an almost core optimum, as well as algorithms for the almost core optimum on some constrained minimum cost spanning tree structures. In addition, we study the almost core concept for value games, for which the same algorithmic results can be derived.
In this talk we will look at a discrete hidden-movement game in which a hunter tries to catch an invisible rabbit on an infinite graph in finite time. The rabbit sits on the nodes of the graph, and after each shot of the hunter it scampers, startled, to a neighboring node. We ask how a graph must be structured so that the hunter can catch the rabbit in finite time regardless of the rabbit's unknown starting position and route. In particular, we find a characterization of all graphs, possibly with infinitely many nodes, on which the hunter has such a winning strategy.
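As a finite warm-up for this game (an illustration, not the talk's construction), hunter strategies on a finite path can be verified by tracking the set of nodes the invisible rabbit could still occupy; the hunter wins exactly when this set can be emptied:

```python
def hunter_wins(n, shots):
    """The hunter shoots the given sequence of nodes on the path 0..n-1.
    We track every node the invisible rabbit could occupy; after each
    missed shot the surviving rabbits move to an adjacent node.
    The hunter wins iff no possible position survives the sequence."""
    possible = set(range(n))
    for s in shots:
        possible.discard(s)      # a rabbit sitting on the shot node is caught
        if not possible:
            return True
        possible = {q for p in possible for q in (p - 1, p + 1) if 0 <= q < n}
    return not possible

# One sweep over the interior nodes catches rabbits of one starting parity;
# a single extra shot flips the parity, and a second sweep finishes the job.
n = 6
sweep = list(range(1, n - 1))
double_sweep = sweep + [1] + sweep
```

Here `hunter_wins(6, sweep)` is false (rabbits of the other parity survive), while `hunter_wins(6, double_sweep)` is true.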
We discuss the NP-hard Minimum Capacity-Preserving Subgraph (MCPS) problem: Given a directed graph with edge capacities and a retention ratio $\alpha \in (0,1)$, find the smallest subgraph that, for each pair of vertices $(u,v)$, preserves at least a fraction $\alpha$ of a maximum $u$-$v$-flow's value. This problem is motivated by research concerning the reduction of power consumption in a computer network: it models turning off as many links as possible while retaining the ability to transmit at least $\alpha$ times the traffic compared to the original network. Previous work shows how to solve MCPS with uniform edge capacities on laminar series-parallel graphs (LSPs), a generalization of directed series-parallel graphs that also includes cyclic and very dense graphs. We extend this work to MCPS with non-uniform edge capacities on LSPs. Moreover, we present current research-in-progress on the (in)approximability of MCPS.
18:30 Dinner and pub quiz
Network creation games are models used to study the formation and evolution of networks such as social networks. In these games, players strategically decide how to connect with others in order to maximize their own utility. One variant is the $2$-neighbourhood network creation game, where players aim to have as many other players as possible within their $2$-neighbourhood. Studying the formation and shape of Nash equilibria, we reveal quite strong restrictions on their structure.
Consider words of length $n$. The set of all periods of a word of length $n$ is a subset of $\{0,1,2,\dots,n-1\}$. However, not every subset of $\{0,1,2,\dots,n-1\}$ is a valid set of periods. In a seminal paper in 1981, Guibas and Odlyzko proposed to encode the set of periods of a word as a binary string of length $n$, called an autocorrelation, in which a one at position $i$ denotes a period of $i$. They considered the question of recognizing a valid period set, and also studied the number of valid period sets for length $n$, denoted $\kappa_n$. They conjectured that $\ln(\kappa_n)$ asymptotically converges to a constant times $\ln^2(n)$. While improved lower bounds for $\ln(\kappa_n)/\ln^2(n)$ were proposed in 2001, the question of a tight upper bound has remained open since Guibas and Odlyzko's paper. Here, we exhibit an upper bound for this fraction, which implies its convergence and closes this long-standing conjecture. Moreover, we extend our result to obtain similar bounds for the number of correlations: a generalization of autocorrelations which encodes the overlaps between two strings.
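To make the encoding concrete, here is a short illustrative sketch (not from the talk) that computes the autocorrelation of a word, plus a brute-force count of distinct autocorrelations over a binary alphabet (which, as Guibas and Odlyzko showed, already realizes all valid period sets):

```python
from itertools import product

def autocorrelation(w):
    """Binary string whose i-th character is 1 iff i is a period of w,
    i.e. w[j] == w[j + i] for all valid j (position 0 is always a period)."""
    n = len(w)
    return "".join(
        "1" if all(w[j] == w[j + i] for j in range(n - i)) else "0"
        for i in range(n)
    )

def kappa(n):
    """Number of distinct autocorrelations of words of length n,
    counted by exhausting all binary words (exponential in n)."""
    return len({autocorrelation("".join(w)) for w in product("ab", repeat=n)})
```

For instance, the word `abab` has autocorrelation `1010` (periods 0 and 2), and for $n = 4$ exactly four period sets are valid.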
We investigate whether the proof system of linear-programming-based branch-and-bound using general disjunctions (more recently also known as Stabbing Planes (SP)) admits quasi-feasible monotone real interpolation. This would allow us to derive the first unconditional exponential (in the encoding length of the integer program) lower bound on the size of a branch-and-bound tree for a particular integer linear program from exponential lower bounds for monotone real circuits separating a pair of languages known as Broken Mosquito Screen. To this end, we show that for every branch-and-bound tree showing the integer-freeness of a product $P \times Q$ of two polyhedra, there exists either a closely related branch-and-bound tree showing the integer-freeness of $P$ or a closely related tree showing the integer-freeness of $Q$. Moreover, we show that monotone real circuits can perform binary search efficiently. This is work in progress.
11:00 Coffee break
In a Stackelberg Network Pricing game, one player, called the Leader, controls the prices/weights of some entities (vertices or edges) of the underlying graph. The remaining weights are constant and known to the Leader. The other players, called followers, each have a subgraph on which they independently solve a (combinatorial) optimization problem given the prices set by the Leader. The problem then asks for a pricing scheme that maximizes the Leader's revenue, which corresponds to the prices of the entities contained in any of the followers' solutions. I consider the problem Stackelberg Vertex Cover (StackVC) with a single follower, i.e., the Leader controls the prices of some vertices, and the follower aims to find a vertex cover of the graph with minimum weight. Previous research has shown that StackVC is NP-hard on general bipartite graphs. To find the complexity boundary of the problem, I investigated paths, arguably the simplest type of bipartite graph. I propose a dynamic program with linear complexity that determines optimal prices from the Leader's perspective. With minor adaptations, this algorithm also extends to trees and forests. While the exact complexity boundary remains unknown, this is one of very few positive results in the Stackelberg setting.
For any finite set $H=\{H_1,\dots,H_p\}$ of graphs, a graph is $H$-subgraph-free if it does not contain any of $H_1,\dots,H_p$ as a subgraph. In recent work, meta-classifications have been studied: these show that if graph problems satisfy certain prescribed conditions, their complexity is determined on classes of $H$-subgraph-free graphs. We continue this work and focus on problems that have polynomial-time solutions on classes that have bounded treewidth or maximum degree at most $3$ and examine their complexity on $H$-subgraph-free graph classes where $H$ is a connected graph. With this approach, we obtain comprehensive classifications for (Independent) Feedback Vertex Set, Connected Vertex Cover, Colouring and Matching Cut. This resolves a number of open problems. We highlight that, to establish that Independent Feedback Vertex Set belongs to this collection of problems, we first show that it can be solved in polynomial time on graphs of maximum degree $3$. We demonstrate that, with the exception of the complete graph on four vertices, each graph in this class has a minimum size feedback vertex set that is also an independent set.
We devise a pivot rule that ensures that the simplex algorithm leaves any (degenerate) vertex after linearly many pivots if an improving feasible direction for the vertex is known in advance. Combining this with a result on the number of vertices visited by the simplex algorithm yields a new bound on the total number of simplex pivots leading to an optimal vertex.
13:00 Lunch

We study generalizations of the Partition Set Cover Problem from [IV18], in which one is given a ground set $E$, a weighted collection $S$ of subsets of $E$, and a second collection $U$ of disjoint subsets of $E$. Additionally, each element of $U$ comes with a demand that is at most as large as its cardinality. The objective is to choose a minimum-weight subcollection of $S$ such that, for each subset $C$ in $U$, it holds that the number of covered elements from $C$ is at least as large as its demand.

The motivation for studying the problem stems from robust optimization. Essentially, one needs to cover a ground set of elements but whether an element is present or not depends on the particular scenario. We think of $U$ as a discrete uncertainty set and thus call the elements of $U$ scenarios. In our generalization, which is a partial version of the above problem, the scenarios are not necessarily disjoint, and we are additionally given an integer number $l > 0$ which specifies how many scenarios need to be covered.

For the case where the demand of each scenario is given by its cardinality, we consider two LP-based bi-criteria approximation algorithms and investigate different ways of transforming the obtained solutions into feasible ones. This results in several approximation algorithms whose ratio depends on the number of subsets to be covered or on the size of $U$.

[IV18] Tanmay Inamdar and Kasturi Varadarajan. On the partition set cover problem. arXiv preprint arXiv:1809.06506, 2018.

An important objective in the scheduling literature is to minimize the sum of weighted flow times. We are given a set of jobs where each job is characterized by a release time, a processing time, and a weight. Our goal is to find a preemptive schedule on a single machine that minimizes the sum of the weighted flow times of the jobs, where the flow time of a job is the time between its release time and its completion time. Lars Rohwedder, Andreas Wiese, and I developed the first PTAS for this problem, using a previously known reduction to a geometric covering problem; previously, only a QPTAS had been known for about 20 years. Currently we are working on a more natural interpretation of this algorithm and on generalizations of this setting using different cost functions or more than one machine.
We study a new graph separation problem called Multiway Near-Separator. Given an undirected graph $G$, integer $k$, and terminal set $T \subseteq V(G)$, it asks whether there is a vertex set $S \subseteq V(G) \setminus T$ of size at most $k$ such that in graph $G-S$, no pair of distinct terminals can be connected by two pairwise internally vertex-disjoint paths. Hence each terminal pair can be separated in $G-S$ by removing at most one vertex. The problem is therefore a generalization of (Node) Multiway Cut, which asks for a vertex set for which each terminal is in a different component of $G-S$. We develop a fixed-parameter tractable algorithm for Multiway Near-Separator running in time $2^{O(k \log k)} \cdot n^{O(1)}$. Our algorithm is based on a new pushing lemma for solutions with respect to important separators, along with two problem-specific ingredients. The first is a polynomial-time subroutine to reduce the number of terminals in the instance to a polynomial in the solution size $k$ plus the size of a given suboptimal solution. The second is a polynomial-time algorithm that, given a graph $G$ and terminal set $T \subseteq V(G)$ along with a single vertex $x \in V(G)$ that forms a multiway near-separator, computes a $14$-approximation for the problem of finding a multiway near-separator not containing $x$. The talk is based on joint work with Bart Jansen.
16:00 Coffee break
In the Exact Matching problem, we are given a graph with edges colored red and blue, and an integer $k$. The goal is to output a perfect matching with exactly $k$ red edges. After introducing the problem in 1982, Papadimitriou and Yannakakis conjectured it to be NP-hard. Soon after, however, Mulmuley et al. (1987) proved that it can be solved in randomized polynomial time, which makes it unlikely to be NP-hard. Determining whether Exact Matching is in P remains an open problem, and very little progress has been made towards that goal. Yuster (2012) showed that relaxing the perfect matching constraint by even one edge results in a simple deterministic polynomial-time algorithm. Generalisations of the problem have been well studied in the literature. These include Bounded Color Matchings, where we have multiple color constraints, and Budgeted Matchings, where the color constraints (i.e. cardinality constraints) are replaced by budget constraints. Approximation algorithms have been studied for these problems, but none of the prior results work when enforcing a perfect matching constraint. In this talk I show how we can transform the Exact Matching problem into an optimization problem while keeping the perfect matching requirement, and then present approximation algorithms for it.
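The problem statement is easy to explore by brute force; the toy sketch below (illustrative only, exponential time, not a proposed algorithm) enumerates perfect matchings of a small graph and filters by the number of red edges:

```python
from itertools import permutations

def exact_matching(n, red, blue, k):
    """Brute-force Exact Matching on a graph with 2n vertices 0..2n-1:
    search for a perfect matching that uses exactly k red edges.
    Edges are given as sets of frozensets; only feasible for tiny n."""
    edges = red | blue
    for perm in permutations(range(2 * n)):
        pairs = [frozenset(perm[2 * i: 2 * i + 2]) for i in range(n)]
        if all(p in edges for p in pairs) and sum(p in red for p in pairs) == k:
            return pairs
    return None
```

On the 4-cycle with alternating red and blue edges, a perfect matching with $k = 2$ red edges exists (both red edges), while none with $k = 1$ does.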
In this talk, we introduce the robust directed $b$-matching problem. An instance of the problem consists of a directed graph with costs on each arc and uncertain capacities on each vertex. The uncertainty is expressed through a set of possible capacities, called scenarios, from which one scenario will occur. First, for each vertex, we set a pre-matching which fixes the value of the matching over all outgoing arcs. Then, depending on the pre-matching, the worst-case scenario is chosen. Finally, we set the matching, respecting the pre-matching as well as the capacities of each vertex given by the scenario, while minimizing the costs. We examine the problem's complexity and show NP-hardness. We also explore the influence of the graph's bandwidth on the complexity of the problem and propose a pseudo-polynomial time algorithm on a special subclass of graphs.
The Elementary Shortest Path Problem (ESPP) is the problem of finding an elementary minimum-cost path in a directed graph between two nodes in such a way that each node on the path is visited exactly once. If the arc costs are non-negative, then the problem can be solved efficiently with a polynomial time algorithm like Dijkstra's algorithm. If negative arc costs are allowed, then the problem is NP-hard. We study an exact integer programming formulation for the ESPP and several semidefinite relaxations. We present a solution approach based on these relaxations.
18:30 Dinner and game night
An elastic-degenerate (ED) string $T$ is a sequence of $n$ sets $T[1], \dots, T[n]$ containing $m$ strings in total whose cumulative length is $N$. We call $n$, $m$, and $N$ the length, the cardinality and the size of $T$, respectively. The language of $T$ is defined as $\mathcal L(T) = \{S_1 \cdots S_n : \text{$S_i$ in $T[i]$ for all $i$ in $[1, n]$}\}$. ED strings have been introduced to represent a set of closely related DNA sequences, also known as a pangenome. The basic question we investigate here is: Given two ED strings, how fast can we check whether the two languages they represent have a nonempty intersection? We present recent efficient algorithms (both combinatorial ones and ones using more advanced techniques) as well as lower bounds for this problem. We also show that the techniques we develop have applications outside of ED string comparison.
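As a toy illustration of the definitions (not the talk's algorithm), the language of a small ED string can be enumerated directly, giving a naive intersection test that is exponential in the ED string's length:

```python
from itertools import product

def language(ed):
    """Enumerate the (finite) language of an elastic-degenerate string,
    given as a list of sets of strings; for tiny examples only."""
    return {"".join(parts) for parts in product(*ed)}

def ed_intersect(ed1, ed2):
    """Naive test whether the languages of two ED strings intersect."""
    return not language(ed1).isdisjoint(language(ed2))
```

For example, `[{"a", "ab"}, {"b", "c"}]` represents the language {ab, ac, abb, abc}, which intersects the language of `[{"abc"}]`.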
In a planar drawing, no edges may cross. In beyond-planar drawings this requirement is relaxed and only specific kinds of crossings are forbidden. We focus on two kinds of beyond-planar drawings: quasi-planar drawings, where no three edges may pairwise cross, and fan-planar drawings, where all edges crossing a given edge must share a common end node that lies on the same side for all of them. Minimizing the number of edge crossings is well established as one of the most important criteria for the comprehensibility of graph drawings. More recent findings suggest that beyond-planar drawings also aid readability. A beyond-planar crossing number problem asks for the smallest number of crossings a beyond-planar drawing of a graph can have. We augmented the state-of-the-art crossing number ILP formulation and its implementation to solve the fan- and the quasi-planar crossing number. We studied how the restriction to beyond-planar drawings influences the solve time and how much it increases the number of crossings needed.
10:30 Coffee break
Chip design plays a vital role in the modern world, enabling the development of advanced electronic devices. However, the process of chip design involves addressing numerous challenging combinatorial optimization problems. These problems emerge as subproblems that must be solved to efficiently design chips. A chip comprises small functional units known as gates, which perform boolean operations on a fixed number of input signals, such as computing the logical AND operation on two binary signals. These gates are interconnected to implement more complex functions that operate on a variable number of inputs, such as an AND operation on multiple signals. In this talk, the focus will be on the problem of combining gates to compute multiple AND-functions on overlapping sets of inputs. This particular problem presents considerable complexity, depending on the objective and the specific input sets involved. Nonetheless, it can be deconstructed into solving a combinatorial optimization problem that involves constructing binary trees on different sets of vertices and maximizing their overlap. Additionally, some side constraints may need to be considered in the optimization process. The talk aims to delve into algorithms that offer good approximation guarantees for solving this problem and explore practical approaches to address it effectively. Furthermore, the presentation will touch upon similar problems and special cases that arise in the realm of chip design. By shedding light on these topics, I want to give some insights into the underlying challenges of chip design optimization.
The time-critical testing problem arises in a variety of practical settings, including manufacturing and quality control. It involves scheduling a set of tests of the components of a multi-component system on a limited number of testing resources while respecting the time constraints for executing all tests. Each test has a known cost, and each component has a known failure probability. A test of a component establishes the component's state (working or failing) with certainty. The objective is to determine the state of the system (working or failing) at minimum expected cost, where the system state is a function of the component states. In the case where no deadline is imposed, tests can be scheduled sequentially in a non-decreasing order of the ratio of cost to the probability of failure. With a restrictive deadline, however, multiple tests might need to be executed simultaneously, and the problem becomes NP-hard. We develop a branch-and-price algorithm for this scheduling problem, in which batches of simultaneous tests are generated using column generation.
With the rise of alternative modes of transport, such as bike sharing, car sharing, or electric scooters, city authorities face challenges in adapting the existing transport network to meet environmental goals. This trend can be observed in the Network Design Problem literature, where the focus is shifting from the classical single-mode network, e.g., car roads or public transport networks, to multi-modal networks. Planning multi-modal networks poses a number of challenges, most notably the larger network size, the enabling of transfers and multi-modal trips, and the explicit consideration of users' decisions during mode selection. The resulting complex optimization problem can be tackled by bilevel optimization. However, most existing papers propose heuristic solutions or validate approaches on small toy instances due to the high complexity of the planning problem. Our contribution to this field of research is twofold. First, we analyze exact algorithms for bilevel problems, focusing primarily on MIP formulations, and apply them to a general problem formulation covering multiple problem settings. Thus, we are able to derive recommendations on which algorithm, or which combination of algorithms, to use depending on the type of problem. We also compare these algorithms across different problem configurations, which, to the best of our knowledge, has not been done before. We conclude that real-world instances cannot be solved with the available general algorithms and software tools. Second, we therefore propose tailor-made exact algorithms capable of solving larger instances for two practical applications. In the future, we aim to generalize our approach to a larger set of problems in order to design algorithms for the problem cases that our computational study showed to be uncovered by the current algorithms.
12:30 Lunch
Industry talk: Ortec
15:00 Coffee break
Industry talk: Siemens
16:30 Coffee break
Industry talk: TNG
18:00 Break
18:30 Conference dinner
Scheduling jobs with given processing times on identical parallel machines so as to minimize their total completion time is one of the most basic scheduling problems. In this talk, we present a generalization of this classical problem involving scenarios. In our model, a scenario is defined as a subset of a predefined and fully specified set of jobs. The aim is to find a schedule of the whole set of jobs on parallel machines such that the schedule obtained for each given scenario, by simply skipping the jobs not in the scenario, optimizes the sum of the total completion times over all scenarios. While this is a strongly NP-hard problem for an arbitrary number of scenarios, a polynomial time algorithm can be found for the case of a fixed number $K\geq 3$ of scenarios if a structural property on the maximum imbalance of an optimal schedule holds. We present recent advances towards the proof, including the unit processing time case, where we use Hilbert bases of a related convex cone.

In this ongoing work, we study a scheduling problem with two types of jobs: regular jobs and resource jobs. Both have fixed processing times and precedence constraints, meaning that a job may only start once all of its predecessors have finished. Finishing a job at time $t$ incurs a cost given by an arbitrary function of the job's completion time; this function can differ per job. At any point in time, we can process an unlimited number of regular jobs but only one resource job, without the possibility of preemption. We consider a very generic objective function in which we minimize the sum of the costs. This includes the well-known objectives "weighted total completion time" and "makespan" as special cases. Even then, a reduction from the $3$-partition problem shows that this problem is NP-hard.

To solve the problem, we devise a Benders decomposition approach. It fixes the order of the resource jobs via disjunctive arcs in the master problem. Based on these disjunctive arcs, the subproblem schedules the jobs by solving a min-cost flow problem. Interestingly, when the decision variables of the master problem are integer, the subproblem reduces to a max-flow problem. In this latter case, to enhance the performance of the Benders algorithm, we use a dedicated flow algorithm. Other structural properties of the problem could lead to even better performance.

We first compare the performance of different algorithms for solving the max-flow and min-cost flow problems. We then study their impact on the performance of our dedicated Benders decomposition approach across different problem instances.
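To illustrate the kind of dedicated max-flow routine that could serve as the integer-case subproblem solver, here is a minimal, self-contained Edmonds-Karp sketch. The graph, node names, and capacities are invented for illustration and have no connection to the talk's actual instances:

```python
# Minimal Edmonds-Karp max-flow sketch (pure Python, illustrative toy data).
from collections import deque

def max_flow(cap, s, t):
    """cap: dict {u: {v: capacity}}; returns the max s-t flow value."""
    # Build residual capacities, adding reverse arcs with capacity 0.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Recover the path, find its bottleneck, and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= delta
            res[v][u] += delta
        flow += delta

cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(cap, "s", "t"))  # → 4
```

In a Benders setting, such a routine would be called once per master iteration, so a solver that can warm-start from the previous residual graph (rather than the from-scratch version above) is where the talk's "dedicated flow algorithm" could pay off.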

We consider cost allocation for set covering problems. We allocate as much cost to the elements (players) as possible without violating the group rationality condition (no subset of players pays more than covering this subset would cost), and so that the excess vector is lexicographically maximized. This is identical to the well-known nucleolus if the core of the corresponding cooperative game is nonempty, i.e., if some optimal fractional cover is integral. In general, we call this the happy nucleolus. As for the nucleolus, the excess vector contains an entry for every subset of players, not only for the sets in the given set covering instance. Nevertheless, we give an explicit family of at most $mn$ subsets such that the happy nucleolus is always completely determined by these subsets; here $m$ and $n$ denote the number of sets and the number of players in our set covering instance. We show that this is the unique minimal such family in a natural sense. Motivated by applications in vehicle routing, we extend the concept of the happy nucleolus and the above result to set covering problems with packing constraints. This allows for a reasonable cost allocation even in a setting where coalitions are competing for scarce resources.
11:00 Coffee break

We investigate the behavior of an integer program when the objective function changes. In particular, we are interested in conditions and sets for which a given integer solution stays optimal. The set of all such objective vectors forms the normal cone at the solution, i.e., for all objectives in this cone, the solution stays optimal. This cone is described by all active constraints of the integer hull. Since the integer hull is not known in general and is expensive to calculate, we are looking for sufficient conditions for an objective to lie in this cone and ways to calculate these cones without calculating the integer hull.

As one example of such a condition, we investigate the $k$-neighborhood. Under the assumption that the polyhedron of feasible binary solutions is a sub- or superset of the $k$-neighborhood polyhedron, we derive an outer or inner approximation of the normal cone. Another approach for calculating the normal cone is by repeatedly adding relevant, feasible solutions to an existing sub-cone. The latter approach requires solving an integer program in each iteration and is thus computationally expensive. We discuss these approaches in the context of analyzing iron-based energy networks.
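The defining property of the normal cone can be checked by brute force on small instances: $x^*$ stays optimal for an objective $c$ if and only if $c^\top x^* \geq c^\top x$ for every feasible $x$. The following toy sketch (invented knapsack-type constraint, not from the talk) enumerates all binary vectors, which is exactly the expensive step the abstract's sufficient conditions aim to avoid:

```python
# Illustrative brute-force membership test for the normal cone at x*
# (toy feasibility set; the real approaches avoid full enumeration).
from itertools import product

def feasible(x):
    # Hypothetical constraint for illustration: 2*x1 + 3*x2 + 4*x3 <= 6.
    return 2*x[0] + 3*x[1] + 4*x[2] <= 6

def stays_optimal(x_star, c, n=3):
    """Does x* remain a maximizer of c over all feasible binary vectors?"""
    best = max(sum(ci * xi for ci, xi in zip(c, x))
               for x in product((0, 1), repeat=n) if feasible(x))
    return sum(ci * xi for ci, xi in zip(c, x_star)) >= best

x_star = (1, 0, 1)                         # feasible: 2 + 4 <= 6
print(stays_optimal(x_star, (1, 1, 1)))    # → True  (c in the normal cone)
print(stays_optimal(x_star, (1, 5, 1)))    # → False (x2 becomes too valuable)
```

Each feasible vector contributes one linear inequality $c^\top(x^* - x) \geq 0$ on $c$; collecting these inequalities for "relevant" feasible solutions is precisely the iterative sub-cone construction described above.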

We consider the following setting of fixed-order vehicle routing problems, where $k$ taxis, each of capacity $c$, need to serve $n$ customers on a line. Additionally, there is a global ordering that determines the order in which the customers within each taxi must be served. The objective is to minimize the time at which the last customer is served, the sum of the service times, or the total route length. For the latter objective, the problem is equivalent to the offline version of the $k$-server problem. In general, this setting generalizes many scheduling problems in which the set of jobs allocated to any machine is always scheduled according to a certain ordering in an optimal solution (e.g., by Smith's ratio).

We show NP-hardness of the problem setting for all three considered objective functions. We argue that, perhaps counter-intuitively, the fixed-order setting makes finding optimal or even approximate solutions harder. Finally, we give some insights into our ongoing work on approximation algorithms. This is joint work with Tim Oosterwijk and René Sitters.

We consider a problem in which a linear function is optimized over the set of binary vectors of a given finite length. These vectors can be understood as incidence vectors of switches over a discrete time horizon that at each point in time can either be turned on or off. The on-off pattern of each switch is subject to a 'simple' practical constraint, namely an upper bound on its total variation, i.e., the number of different consecutive entries. We distinguish between whether the variation is penalized in the objective function or bounded by a hard constraint. Since the resulting problem is polynomial-time solvable in either case, we are interested in how the time complexity is affected when, for each point in time, additional 'simple' combinatorial constraints (for example, a selection-type constraint) are imposed, thus enforcing a structure among the switches.
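For a single switch with a hard variation bound, the polynomial-time solvability mentioned above can be seen via a small dynamic program over (time, current state, changes used). The profits and bound below are invented toy data, not from the talk:

```python
# Toy DP sketch: choose an on/off pattern x_1..x_T maximizing sum p_t * x_t
# subject to at most V changes between consecutive entries (hard variation
# bound, single switch; illustrative data only).

def best_pattern_value(p, V):
    T = len(p)
    NEG = float("-inf")
    # dp[s][v] = best value so far, current state s (0=off, 1=on), v changes used.
    dp = [[NEG] * (V + 1) for _ in range(2)]
    dp[0][0] = 0.0           # start in the off state
    dp[1][0] = p[0]          # start in the on state (initial state is free)
    for t in range(1, T):
        new = [[NEG] * (V + 1) for _ in range(2)]
        for s in (0, 1):
            gain = p[t] if s else 0.0
            for v in range(V + 1):
                if dp[s][v] > NEG:                       # stay in state s
                    new[s][v] = max(new[s][v], dp[s][v] + gain)
                if v >= 1 and dp[1 - s][v - 1] > NEG:    # switch: one change
                    new[s][v] = max(new[s][v], dp[1 - s][v - 1] + gain)
        dp = new
    return max(max(row) for row in dp)

print(best_pattern_value([3, -1, 4, -2, 5], V=2))  # → 11.0 (on,on,on,off,on)
```

The state space has size $O(T \cdot V)$ per switch; the question raised in the abstract is what happens to this tractability once per-time-step constraints couple the switches, so that patterns can no longer be chosen independently.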
13:00 Lunch
14:00 Activity and dinner

Assume that we have a signal spreading through a weighted connected graph $G = (V, E)$ with the following properties: the signal is sent at a constant velocity $\frac 1 c > 0$ from a unique, unknown source node $s \in V$ at a starting time $t_s$. For every node $v \in V$, we can measure the time $t_v \in \mathbb R$ at which the signal reaches $v$; thus $t_v = t_s + c\ d(s, v)$, where $d(s, v)$ denotes the length of a shortest path from $s$ to $v$. A spread-resolving set is a subset $B \subseteq V$ such that $\forall t,c \in \mathbb R$ with $c > 0$ and $\forall v \neq w \in V$ we have $d(v, B) \neq t\ \mathbb 1 + c\ d(w, B)$, where $d(v, B)$ is the vector of ordered distances from $v$ to every node $b \in B$ and $\mathbb 1$ denotes the all-ones vector. With measurements $t_b$ for the nodes $b$ in a spread-resolving set $B$, we are able to compute the source node $s$, the time offset $t_s$, and the velocity. The goal of the spread dimension problem is to find a spread-resolving set containing as few nodes as possible.

In this talk we introduce the spread dimension problem and its applications. We focus on the NP-completeness of this problem and on algorithms that compute spread-resolving sets. Finally, we give an insight into its approximability.
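The defining condition can be verified by brute force on small graphs: $B$ fails to be spread-resolving exactly when two distinct nodes have affinely related distance vectors with positive slope $c$. The following sketch (toy path graph, assumed interpretation that $d(v,B)$ lists distances in a fixed order of $B$) illustrates this:

```python
# Brute-force spread-resolving check on a toy graph (illustrative only):
# B is NOT spread-resolving if some pair v != w admits t in R, c > 0 with
# d(v, B) = t*1 + c*d(w, B) componentwise.
from itertools import permutations

def all_pairs_dist(n, edges):
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in range(n):               # Floyd-Warshall
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def affinely_related(x, y):
    """Is there t in R and c > 0 with x = t + c*y (componentwise)?"""
    diffs = [(xi - x[0], yi - y[0]) for xi, yi in zip(x, y)]
    if all(dy == 0 for _, dy in diffs):
        return all(dx == 0 for dx, _ in diffs)   # then any c > 0 works
    dx0, dy0 = next((dx, dy) for dx, dy in diffs if dy != 0)
    c = dx0 / dy0                                # slope is forced
    return c > 0 and all(dx == c * dy for dx, dy in diffs)

def is_spread_resolving(n, edges, B):
    d = all_pairs_dist(n, edges)
    return not any(
        affinely_related([d[v][b] for b in B], [d[w][b] for b in B])
        for v, w in permutations(range(n), 2))

edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1)]        # unit-weight path 0-1-2-3
print(is_spread_resolving(4, edges, [0, 3]))     # → False
print(is_spread_resolving(4, edges, [0, 1, 3]))  # → True
```

On the path, the endpoints alone do not suffice: $d(0, \{0,3\}) = (0,3)$ equals $-3 \cdot \mathbb 1 + 3 \cdot d(1, \{0,3\})$, so nodes $0$ and $1$ cannot be distinguished without knowing the velocity; adding node $1$ to $B$ breaks every such relation.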

We will discuss proper $q$-colourings of sparse, bounded degree graphs when the maximum degree is near the so-called shattering threshold. This threshold, first identified in the statistical physics literature, coincides with the point at which all known efficient colouring algorithms fail, and it has been hypothesized that the geometry of the solution space (the space of proper colourings) is responsible. This hypothesis is a cousin of the Overlap-Gap property of Gamarnik '21. Significant evidence for this picture was provided by Achlioptas and Coja-Oghlan '08, who drew inspiration from statistical physics, but their work only explains the performance of algorithms on random graphs (average-case complexity). We extend their work beyond the random setting by proving that the geometry of the solution space is well behaved for all graphs below the "shattering threshold". This follows from an original result about the structure of uniformly random colourings of fixed graphs. Joint work with François Pirot.
We consider semidefinite programming (SDP) approaches for solving the well-known Satisfiability (SAT) problem and the related maximum-satisfiability (MAX-SAT) problem. SAT is a fundamental binary decision problem stemming from computer science with a wide range of applications. It is widely known that SDP is well-suited to approximate (MAX-)2-SAT instances, a class of (MAX-)SAT instances similar to Max-Cut. Our work shows the potential of SDP on (MAX-)3-SAT instances as well, by being competitive with some of the best solvers in the yearly MAX-SAT competition. These results are obtained by a tailored SDP solver and parser, in combination with a branch-and-cut scheme. Moreover, we demonstrate an elegant new connection between two previous approaches of combining SDP and SAT. This is joint work with Renata Sotirov.
11:00 Coffee break
The min-max resource sharing problem can be regarded as a norm-minimization problem over a convex set, with the restriction that the convex set can be accessed only via a linear minimization oracle. An interesting application of this problem is global routing in VLSI design, which is the task of connecting different components on a computer chip with wires. First and foremost, we improve the formerly best-known running time for the general min-max resource sharing problem. Moreover, we provide new structural results. Lastly, we discuss how our algorithms can be applied in practice and present results on industrial microprocessors.
Learning-augmented algorithms attempt to combine the advantages of classical worst-case analysis and machine-learning approaches, and applying this concept to numerous online problems has been a very active area in recent years. In this work we introduce learning-augmented algorithms for the online graph coloring problem. We establish a relationship between the structure of an input online graph $G$ and the number of colors that the algorithm FirstFit uses for $G$. We then propose a coloring algorithm that extends FirstFit while making use of machine-learned predictions, and employ the aforementioned structural result to show that the algorithm is consistent. Finally, we show how our algorithm can be robustified by combining it with any classical online algorithm (that disregards the predictions).
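FirstFit, the baseline this work extends, is simple enough to sketch in full; the arrival order and graph below are an illustrative example, not from the talk. Each arriving vertex receives the smallest color not used by its already-colored neighbors:

```python
# Minimal FirstFit online coloring sketch (toy instance for illustration).

def first_fit(adjacency):
    """adjacency[v] = neighbors of v among the earlier vertices 0..v-1."""
    color = {}
    for v, earlier_neighbors in enumerate(adjacency):
        used = {color[u] for u in earlier_neighbors}
        c = 0
        while c in used:        # smallest color not blocked by a neighbor
            c += 1
        color[v] = c
    return color

# A 5-cycle arriving in the order 0,1,2,3,4 with edges
# (0,1), (1,2), (2,3), (3,4), (4,0):
colors = first_fit([[], [0], [1], [2], [3, 0]])
print(colors)  # the last vertex is forced to a third color
```

The number of colors FirstFit needs depends heavily on the arrival order, which is exactly why relating it to the structure of the online graph, and then augmenting it with predictions, is the interesting step.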
12:30 Closing and awards
13:30 Lunch


The registration is closed! Due to the large number of submissions, we unfortunately cannot offer a presentation slot to every submission. If you have registered in time and received a confirmation, you will hear from us shortly!

To register for FRICO 2023, send an email to containing the following information:

  • full name
  • affiliation
  • title of talk
  • short abstract
  • dietary restrictions (if any)
Within five working days, you will receive a confirmation of your submission. If this is not the case, please contact the organizers directly.

The deadline for registration is June 14. Although we aim to welcome as many participants as possible, there is a limit on the number of talks we can accommodate. Talks will be selected based on how closely they adhere to the research interests of FRICO. You will receive a definitive confirmation of your participation shortly after the deadline.

For the collection and processing of personal data necessary for registration, please review our privacy policy.


We recommend the following hotels:

You can also look at: and

Travel information


FRICO 2023 takes place at Eindhoven University of Technology. The campus is within walking distance of the station and city center and offers ample parking space. The conference will take place in the Corona room in Luna. Coffee breaks and lunches are organized at student cafe Hubble, in the same building.

Corona room
Luna building
De Lampendriessen 31
5612 AH  Eindhoven

A map of the campus can be found here.

History of FRICO

Ulrich Pferschy described the origin of the FRICO in the program booklet of the 10th FRICO as follows:

" … However, some people who are taking part in this relaxed exchange of ideas for the first time may ask themselves how such an unstructured event came about. To prevent the formation of false legends, which are sometimes already spread in an hour with wine, I would like to briefly present the official version of the FRICO creation.
In the autumn of 1996, a two-week summer school on the approximation of combinatorial optimization problems took place in Udine (this was the summer school to which the great Papadimitriou had mistakenly arrived a year too early). On the free weekend contained therein, it was obvious to undertake an excursion into the surrounding wine country. At least this was the firm conviction of the two Graz participants, Rüdiger Rudolf and myself. Since excursions of this kind are only half as nice for two (if they are men), we successfully tried to win the apparently like-minded participants Dagmar Handke (Konstanz) and Katja Wolf (Cologne) as companions. The beautiful excursion ended in a trattoria in Cividale with a special local dish, which can best be described as a mixture of potatoes, onions, bacon, and cheese; nothing for a weak stomach. The Frico cheese, a Friulian specialty, gives the dish its name. In a cheerful circle, Katja invited all of us to a Frico dinner in Cologne without knowing what she was starting.
However, it was to take over a year before this invitation could be made more concrete. Now the way from Graz to Cologne is quite long, and as optimizers, we tried to combine the pleasant with the useful. Without further ado, we offered to combine the private visit with a lecture at the then ZPR Cologne. And so that it didn't look as if the people of Cologne can only listen and have nothing to say themselves, Katja immediately obliged a few "locals" to give further lectures. Thus a one-day workshop had developed in the twinkling of an eye. Although the organizer said: "We can't call it FRICO", I managed to find the acronym that is known today.
The first FRICO Workshop in 1997 was a great success (in contrast to the Frico dinner in the evening, which was canceled from the program in the following years). This moved Professor Schrader to donate a barrel of Kölsch and to "threaten" to continue organizing this event format on his own if we did not. Of course, we couldn't afford to let that happen, and so the second FRICO was decided in Graz in 1998.
The rest is history."

Previous FRICOs and best talk awards