My research is in algorithms for discrete problems and computational complexity. What I find intriguing about this area is that many fundamental questions of great importance are still wide open, and that many results run straight against our current intuition. This is not surprising, since the area is relatively young; there is still much to be gained, and I am happy to contribute to it.
Naturally, the main question is how fast a particular computational problem can be solved exactly in the worst-case setting. What bothers me is that most research oversimplifies this question to determining whether the problem is in P or is NP-hard, and commonly refrains from pursuing the original question (that is, the optimal worst-case running time) in the latter case. In my opinion this skips over the most intriguing part: if P does not equal NP, every NP-hard problem must feature some barrier running time beyond which we cannot improve, so what does this mysterious barrier look like, and how does it depend on the structure of the problem?
On the other hand, if we would like to prove P = NP, we had better first find modest improvements over the current naïve exhaustive-search algorithms. In several cases, obtaining even the smallest improvement over a trivial brute-force algorithm seems almost as impossible and fundamental a challenge as the P versus NP question itself. In other cases, however, surprising and highly significant improvements over algorithms more than 50 years old have been found. This is what I find especially fascinating and am happy to contribute to.
My research interests are by no means restricted to exact algorithms for NP-hard problems; I have also written papers drawing substantially on (among other areas) algorithmic game theory, information theory, representation theory, approximation algorithms, and online algorithms.