We support the TCS4F — an initiative for reducing the carbon footprint.

14–17 SEPTEMBER 2021

14 September: Tutorial day
15–17 September: Conference

Aachen → online

Due to the pandemic, Highlights 2021 will happen virtually.

The goal of Highlights conferences is to integrate the community working on logic, games and automata.

IMPORTANT DATES

4 June 2021 AoE: Presentation submission deadline
18 June 2021: Notification
14 September 2021: Tutorial day
15–17 September 2021: Conference

SUBMISSION WEB PAGE

https://easychair.org/conferences/?conf=highlights2021

PREVIOUS EDITIONS

  1. Highlights 2020 (Online, 15–18 September)
  2. Highlights 2019 (Warsaw, 17–20 September)
  3. Highlights 2018 (Berlin, 18–21 September)
  4. Highlights 2017 (London, 12–15 September)
  5. Highlights 2016 (Brussels, 6–9 September)
  6. Highlights 2015 (Prague, 15–18 September)
  7. Highlights 2014 (Paris, 2–5 September)
  8. Highlights 2013 (Paris, 18–21 September)

No proceedings

There are no proceedings. You present your best work, be it published elsewhere or yet unpublished.

Short

The conference is three days long. The contributed talks are around ten minutes.

Cheap

The participation costs are modest. Your computer is easy to reach.

Everybody is doing it!

Let's use the conference model that is so successful in other fields, like mathematics.

Committees

Call for presentations

CALL FOR PRESENTATIONS:

HIGHLIGHTS 2021 is the ninth conference on Highlights of Logic, Games and Automata. It aims at integrating the community working in these fields. Papers from these areas are dispersed across many conferences, which makes them difficult to follow.

A visit to the Highlights conference should offer a wide picture of the latest research in the field and a chance to meet everybody in the community, not just those who happen to publish in one particular proceedings volume.

We encourage you to attend and present your best work, be it already published or not, at the Highlights conference.

SCOPE:

Representative areas include, but are not restricted to:

  • algorithmic model theory
  • automata theory
  • databases
  • games for logic and verification
  • logic
  • verification

IMPORTANT GUIDELINES:

Submissions should take the form of a short abstract describing the content of the presentation and why it is of interest. It should serve as a proposal for a presentation. Hence, submissions should have a single author — the speaker. They can concern any recently published, to be published, or ongoing work of the speaker. We expect you to present your favourite result of the year, so there should be at most one submission per speaker. The abstract should list co-authors, if any.
Optionally, an extended abstract of up to two pages may be attached as a PDF file. 

Submissions will not lead to publications. There are no formal proceedings and we encourage submission of work presented elsewhere.

SUBMISSION WEB PAGE

https://easychair.org/conferences/?conf=highlights2021

Keynote speakers

Tutorials

All times in the program are CEST.

Program

Tuesday 14/09/2021

Session T1: Tutorial 1
Chair: Lorenzo Clemente

  • 09:30 – 10:30 Michał Pilipczuk: Old and new advances in model checking first order logic, Part I

    Abstract:

    We are interested in finding a precise correspondence between different logics L and properties of classes of graphs C, where the correspondence is defined by the tractability of the model-checking problem. Precisely, we say that a logic L is tractable on a class of graphs C if, given a graph G ∈ C and a sentence φ of L, one can decide whether φ holds in G in time f(φ)·|G|^c for a computable function f and a constant c. Finding such a fixed-parameter algorithm is often an excellent excuse to introduce a robust toolbox for tackling various problems connected to L on classes C on which L is tractable. Also, understanding the condition that distinguishes classes that are tractable from those that are intractable for L usually precisely pinpoints the structural properties that are responsible for tameness of L from the point of view of descriptive complexity theory. For instance, classic results show that the MSO logic corresponds in this sense to the property of having bounded treewidth or bounded cliquewidth (depending on the variant of MSO that we speak about). And to what does the first-order logic FO correspond? This will be the main topic of this tutorial. In the first part of the tutorial we will present classic methods of working with FO on graphs, based on the concepts of compositionality and locality. These are already sufficient to establish tractability of FO on a wide range of structurally-constrained classes, like classes of bounded treewidth, classes of bounded maximum degree, or planar graphs.
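
    The fixed-parameter running time f(φ)·|G|^c in the abstract above can be made concrete with a toy sketch (not part of the tutorial; all names are illustrative): evaluating a sentence with q nested quantifiers by brute force takes about |V|^q steps, which is exactly why structural restrictions on the graph class are needed to do better.

    ```python
    # Naive FO model checking on a graph: evaluating a sentence with q
    # quantifiers by brute force takes O(|V|^q) time, so this matches the
    # f(phi)*|G|^c shape only when q is treated as the parameter.
    # Formulas are nested Python closures; names here are illustrative.

    def forall(V, pred):
        return all(pred(v) for v in V)

    def exists(V, pred):
        return any(pred(v) for v in V)

    # A 4-cycle: every vertex has a neighbour.
    V = {0, 1, 2, 3}
    E = {(0, 1), (1, 2), (2, 3), (3, 0)}

    def edge(x, y):
        return (x, y) in E or (y, x) in E

    # The sentence "forall x exists y. E(x, y)".
    sentence = forall(V, lambda x: exists(V, lambda y: edge(x, y)))
    print(sentence)  # prints True
    ```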

Session T2: Tutorial 2
Chair: Lorenzo Clemente

  • 10:45 – 11:45 Michał Pilipczuk: Old and new advances in model checking first order logic, Part II

    Abstract:

    In the second part we will move to classes of uniformly and robustly sparse graphs: classes of bounded expansion and nowhere dense classes. We will see how compositionality and locality work in this context by sketching the model-checking algorithm on classes of bounded expansion.

Session T3: Tutorial 3
Chair: Lorenzo Clemente

  • 12:00 – 13:00 Michał Pilipczuk: Old and new advances in model checking first order logic, Part III

    Abstract:

    Finally, in the third and last part we will take a closer look at the most recent trends in the area. In particular, we will discuss the notion of FO transductions and how they can be used to construct a structure theory for graphs that is suited for the treatment of FO model-checking and related problems.

Session T4: Tutorial 4
Chair: Jérôme Leroux

  • 14:30 – 15:30 Christoph Haase: A guided tour through Presburger arithmetic and friends, Part I

    Abstract:

    The goal of this tutorial is to give a comprehensive overview of Presburger arithmetic, the first-order theory of the natural numbers with addition and order. To this end, I will present some classical and very recent results that representatively illustrate the many facets of Presburger arithmetic. I will also offer to the audience some concrete technical tools that come in handy when working on lower-level aspects of Presburger arithmetic. The tutorial will consist of three one-hour parts. Part one is concerned with the decidability of Presburger arithmetic, which can be shown by appealing to three algorithmic frameworks: quantifier elimination, generator-based methods and automata-based methods. Each of these frameworks comes with its own advantages and disadvantages, e.g. for showing complexity results or the decidability of extensions of Presburger arithmetic. Besides establishing some complexity results, I will illustrate how choosing the right framework enables one to smoothly show decidability of extensions of Presburger arithmetic with counting quantifiers, Kleene star and restricted divisibility predicates.
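
    As a small illustration of the quantifier-elimination framework mentioned above (a textbook example, not taken from the tutorial itself): eliminating a quantifier in Presburger arithmetic in general requires enriching the language with divisibility (congruence) predicates.

    ```latex
    % Classic example: the quantifier on the left cannot be eliminated
    % within pure (+, <) arithmetic, but the formula becomes
    % quantifier-free once divisibility predicates are available.
    \exists y \,(x = y + y) \;\Longleftrightarrow\; 2 \mid x
    % More generally, Presburger's procedure eliminates \exists at the
    % price of congruences  x \equiv r \pmod{m}.
    ```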

Session T5: Tutorial 5
Chair: Jérôme Leroux

  • 15:45 – 16:45 Christoph Haase: A guided tour through Presburger arithmetic and friends, Part II

    Abstract:

    In the second part, I will focus on lower bounds for Presburger arithmetic and some of its extensions. I will review classical constructions such as the Fischer-Rabin trick and also give some of the main ideas underlying a recent breakthrough showing the intractability of so-called short formulas of Presburger arithmetic.

Session T6: Tutorial 6
Chair: Jérôme Leroux

  • 17:00 – 18:00 Christoph Haase: A guided tour through Presburger arithmetic and friends, Part III

    Abstract:

    The third part of the tutorial will discuss the decidability proof of existential Presburger arithmetic extended with full divisibility predicates, as well as counting solutions of formulas of parametric Presburger arithmetic. Time permitting, I will discuss concrete applications of those extensions in model theory, automata theory and compiler optimisation.

Wednesday 15/09/2021

Session 0: Opening

Session 1: Invited talk 1
Chair: Damian Niwiński

  • 09:30 – 10:30 Karoliina Lehtinen: When a little nondeterminism goes a long way – A survey of good-for-games automata

    Abstract:

    Nondeterminism probably makes your favourite deterministic automata model more expressive, or at least more succinct. This usually comes at the cost of algorithmic complexity. Classes of automata in between deterministic and nondeterministic ones offer different trade-offs with respect to expressivity, succinctness and algorithmic properties. Good-for-games automata, or history deterministic automata, are nondeterministic automata in which nondeterministic choices can be resolved on the fly, without knowledge of the future of a word. They combine some of the succinctness and expressivity of nondeterministic automata with some of the useful algorithmic properties of deterministic automata. In particular, games with winning conditions given by good-for-games automata tend to be simpler to solve than games with nondeterministic winning conditions. In this talk, I will give an overview of good-for-games automata and their relation to various synthesis problems, survey recent developments in the area for different classes of automata and point to some of the hard questions that remain unanswered and some areas that remain unexplored.

Session 2A: Games I
Chair: Patrick Totzke

  • 10:45 – 10:52 Alexander Kozachinskiy: One-to-Two-Player Lifting for Mildly Growing Memory

    Abstract:

    We investigate so-called “one-to-two-player lifting” theorems for infinite-duration two-player games on graphs with zero-sum objectives. These theorems are concerned with questions of the following form: if that much memory is sufficient to play optimally in one-player games, then how much memory is needed to play optimally in two-player games? In 2005, Gimbert and Zielonka (CONCUR 2005) showed that if no memory is needed in the one-player games, then the same holds for the two-player games. Building upon their work, Bouyer et al. (CONCUR 2020) showed that if some constant amount of memory (independent of the size of a game graph) is sufficient in the one-player games, then exactly the same constant is sufficient in the two-player games. They also provide an example in which every one-player game requires only a finite amount of memory (now this amount depends on the size of a game) while some two-player game requires infinite memory. Our main result states the following: if the memory grows just a bit slower (in the one-player games) than in the example of Bouyer et al., then in every two-player game it is sufficient to have finite memory. Thus, our work identifies the exact barrier for one-to-two-player lifting theorems in the context of finite-memory strategies.
  • 10:52 – 10:59 Pierre Vandenhove: Arena-Independent Finite-Memory Determinacy

    Abstract:

    We study deterministic and stochastic zero-sum games on graphs, which are prevalent tools to model decision-making in presence of an antagonistic opponent. In this setting, an important question is the one of strategy complexity: what kinds of strategies are sufficient or required to play optimally (e.g., randomization or memory requirements)? Our contributions further the understanding of arena-independent finite-memory (AIFM) determinacy, i.e., the study of objectives for which memory is needed, but in a way that only depends on limited parameters of the game graphs. First, we show that objectives for which pure AIFM strategies suffice to play optimally also admit pure AIFM subgame perfect strategies. Second, we show that we can reduce the study of objectives for which pure AIFM strategies suffice in two-player stochastic games to the easier study of one-player stochastic games (i.e., Markov decision processes). Third, we characterize the sufficiency of AIFM strategies through two intuitive properties of objectives. These last two contributions generalize work by Gimbert and Zielonka. This is joint work with Patricia Bouyer, Stéphane Le Roux, Youssouf Oualhadj, and Mickael Randour.
  • 10:59 – 11:06 Shufang Zhu: Finite-Trace and Generalized-Reactivity Specifications in Temporal Synthesis

    Abstract:

    Linear Temporal Logic (LTL) synthesis aims at automatically synthesizing a program that complies with desired properties expressed in LTL. Unfortunately, full LTL synthesis has been proved to be computationally too difficult. There have been two success stories with LTL synthesis, both having to do with the form of the specification. The first is the GR(1) approach: use safety conditions to determine the possible transitions in a game between the environment and the agent, plus one powerful notion of fairness, Generalized Reactivity(1), or GR(1). The second, inspired by AI planning, focuses on finite-trace temporal synthesis, with LTLf (LTL on finite traces) as the specification language. In this paper we take these two lines of work and bring them together. We first study the case in which we have an LTLf agent goal and a GR(1) assumption. We then add to the framework safety conditions for both the environment and the agent, obtaining a highly expressive yet still scalable form of LTL synthesis. This is joint work with Giuseppe De Giacomo, Antonio Di Stasio, Lucas M. Tabajara and Moshe Vardi.
  • 11:06 – 11:13 Satya Prakash Nayak: Adaptive Strategies for rLTL Games

    Abstract:

    We consider the problem of synthesizing controllers that are optimal with respect to a quality criterion based on Robust Linear Temporal Logic (rLTL). We address such problems by formulating them as two-player games, called rLTL games. We show that the current algorithms for rLTL games do not compute optimal controllers: they assume the environment to always act antagonistically, which is often an unrealistic assumption. So is there a way of satisfying the specification "better" if the environment is not antagonistic? To address this inefficiency, we develop two new notions of strategies. The first notion is that of adaptive strategies, which, in response to the opponent's bad choices (i.e., suboptimal choices), adapt to ensure optimality w.r.t. the current stage. The second notion is that of strongly adaptive strategies, a stronger version of the first that also maximizes the opportunities for the opponent to make bad choices. We show that the computability problem for both types of strategies is not harder than the classical one and can be solved in doubly-exponential time.
  • 11:13 – 11:20 Clément Tamines: Stackelberg-Pareto Synthesis

    Abstract:

    In this talk, we consider the framework of two-player Stackelberg games played on graphs in which Player 0 announces a strategy and Player 1 responds rationally with a strategy that is an optimal response. While it is usually assumed that Player 1 has a single objective, we consider here the new setting where he has several. In this context, after responding with his strategy, Player 1 gets a payoff in the form of a vector of Booleans corresponding to his satisfied objectives. Rationality of Player 1 is encoded by the fact that his response must produce a Pareto-optimal payoff given the strategy of Player 0. We study the Stackelberg-Pareto Synthesis problem which asks whether Player 0 can announce a strategy which satisfies his objective, whatever the rational response of Player 1. For games in which objectives are either all parity or all reachability objectives, we show that this problem is fixed-parameter tractable and NEXPTIME-complete. This problem is already NP-complete in the simple case of reachability objectives and graphs that are trees. This talk presents joint work with Véronique Bruyère and Jean-François Raskin.
  • 11:20 – 11:27 Daniel Hausmann: Uniform Solving for omega-regular Games

    Abstract:

    We show that fixpoint equation systems over arbitrary finite lattices can be solved with a number of iterations that is quasipolynomial in the lattice height and the alternation-depth of the system. As a side-result, we obtain a highly generic progress measure algorithm that solves fixpoint equation systems over finite lattices and uses universal trees to measure progress. Since the winning regions of various kinds of finite-history omega-regular games can be specified by fixpoint equations, our algorithm instantiates to solving e.g. standard parity games, energy parity games, mean-payoff parity games and stochastic parity games (both qualitative and quantitative). The runtime complexities of the instances of the algorithm typically are quasipolynomial in the number of nodes, the number of priorities, and the maximal size of histories of winning strategies. This work is based on the paper “Quasipolynomial Computation of Nested Fixpoints”, which has been published as joint work with Lutz Schröder at TACAS 2021 (extended version available at https://arxiv.org/pdf/1907.07020.pdf).
  • 11:27 – 11:34 Thomas Steeples: Mean-Payoff Games with omega-Regular Specifications

    Abstract:

    Multi-player mean-payoff games are a natural formalism for modelling the behaviour of concurrent and multi-agent systems with self-interested players. Players in such a game traverse a graph, while trying to maximise a mean-payoff function that depends on the plays so generated. As with all games, the equilibria that could arise may have undesirable properties. However, as system designers, we typically wish to ensure that equilibria in such systems correspond to desirable system behaviours, for example, satisfying certain safety or liveness properties. One natural way to do this would be to specify such desirable properties using temporal logic. Unfortunately, the use of temporal logic specifications causes game theoretic verification problems to have very high computational complexity. To this end, we consider ω-regular specifications, which offer a concise and intuitive way of specifying desirable behaviours of a system. The main result of our work has been the characterisation and complexity bounds for the problem of determining if there are equilibria that satisfy a given ω-regular specification in a multi-player mean-payoff game in a number of computationally relevant game-theoretic settings.
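
    Several of the talks above (notably the one on uniform solving of omega-regular games) reduce game solving to computing fixpoints over finite lattices. As a minimal sketch with illustrative names — naive Kleene iteration, not the quasipolynomial progress-measure algorithm from the talk — here is a least fixpoint instantiated to backward reachability on a graph:

    ```python
    # Naive Kleene iteration for a least fixpoint over a finite lattice,
    # the baseline that quasipolynomial algorithms improve on.  The
    # lattice here is subsets of a finite vertex set, ordered by
    # inclusion; the function f must be monotone.  Illustrative only.

    def lfp(f, bottom=frozenset()):
        # Iterate f from the bottom element until stabilisation.
        x = bottom
        while True:
            y = f(x)
            if y == x:
                return x
            x = y

    # Example: the set of vertices from which a target set T is
    # reachable is the least fixpoint of  X -> T ∪ pre(X).
    E = {(0, 1), (1, 2), (3, 3)}
    T = frozenset({2})

    def step(X):
        return T | frozenset(u for (u, v) in E if v in X)

    print(sorted(lfp(step)))  # prints [0, 1, 2]
    ```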

Session 2B: Automata & languages I
Chair: Luc Segoufin

  • 10:45 – 10:52 Amina Doumane: Regular expressions for languages of tree-width 2 graphs

    Abstract:

    We propose a formalism of regular expressions for languages of tree-width 2 graphs, equivalent to counting MSO.
  • 10:52 – 10:59 Mikołaj Bojańczyk: Star-free expressions for graphs

    Abstract:

    In the study of word languages, first-order logic is more frequently used with the order predicate x > y, and not the successor predicate x=y+1. (For MSO, there is no difference.) For example, the celebrated equivalence of star-free regular languages with first-order logic uses first-order logic with order. With successor only, first-order logic for words is a less appealing logic. In contrast to words, the graph variant of first-order logic is usually studied with the edge relation E(x,y), which seems to be more in the spirit of successor than order. It would be nice to define a variant of first-order logic for graphs, so that the standard notion of first-order logic for words becomes a special case for those graphs which are words. This is the topic of my talk. I will discuss a variant of star-free expressions for graphs, and a variant of first-order logic that is equivalent to these expressions. The variant is first-order logic with separator predicates of the form: “there is a path from vertex s to vertex t, which does not use vertices x1,…,xk”. (This is an infinite family of predicates, with one predicate of arity k+2 for every k.) As further justification for this class of graph languages, it turns out that there is also a characterisation in terms of aperiodic monoids, which works for bounded pathwidth.
  • 10:59 – 11:06 Yvo Meeres: Accepting Directed Acyclic Graphs with the Help of Finite State Automata

    Abstract:

    Analogously to regular string and tree languages, regular directed acyclic graph (DAG) languages have been defined in the literature. Although called regular, these DAG languages are considerably more powerful and, consequently, standard problems have a higher complexity than in the string case. Top-down as well as bottom-up deterministic DAG languages are subclasses of the regular DAG languages. We refine this hierarchy by providing a weaker subclass of the deterministic DAG languages. This weaker class exhibits beneficial algorithmic properties. For a DAG-automaton accepting a language in this new DAG language class, a classical finite state automaton (FSA) can be constructed whose states enumerate the dangling edges throughout an appropriate run of the DAG-automaton. An edge is called dangling if not all of its vertices have been read yet in the run. This means that an FSA can be used for deciding membership of such DAGs. The motivation behind this joint work with Johannes Blum and Frank Drewes is the transfer of results about regular string languages to graphs. Furthermore, we provide a characterization of the class via the DAG-automaton's rules.
  • 11:06 – 11:13 Bartek Klin: Orbit-finite-dimensional vector spaces and weighted register automata

    Abstract:

    We develop a theory of vector spaces spanned by orbit-finite sets. Using this theory, we give a decision procedure for equivalence of weighted register automata, which are the common generalization of weighted automata and register automata for infinite alphabets. The algorithm runs in exponential time, and in polynomial time for a fixed number of registers. As a special case, we can decide, with the same complexity, language equivalence for unambiguous register automata, which improves previous results in three ways: (a) we allow for order comparisons on atoms, and not just equality; (b) the complexity is exponentially better; and (c) we allow automata with guessing. This is joint work with Mikołaj Bojańczyk (Warsaw) and Joshua Moerman (Aachen).
  • 11:13 – 11:20 Rafał Stefański: Single-use register automata and orbit-finite monoids for total-order atoms

    Abstract:

    Single-use register automata are a variant of register automata (in the style of Kaminski & Francez) in which every read access to a register destroys its contents. Last year at Highlights, I discussed single-use register automata for atoms which can only be compared for equality. This year, I will discuss atoms with total orders. One notable difference between these two kinds of atoms is that with equality only, single-use automata recognise the same languages as orbit-finite semigroups, while in the presence of total order, single-use automata correspond to an interesting subclass of orbit-finite semigroups. This is based on joint work with Mikołaj Bojańczyk and Nathan Lhote.
  • 11:20 – 11:27 Mohnish Pattathurajan: Parikh Theorem for Infinite alphabets

    Abstract:

    We investigate commutative images of languages recognised by register automata and grammars. Semi-linear and rational sets can be naturally extended to this setting by allowing for orbit-finite unions instead of only finite ones. We prove that commutative images of languages of one-register automata are not always semi-linear, but they are always rational. We also lift the latter result to grammars: commutative images of one-register context-free languages are rational, and in consequence commutatively equivalent to register automata. We conjecture analogous results for automata and grammars with arbitrarily many registers. This is joint work with Piotr Hofman, Marta Juzepczuk and Sławomir Lasota.
  • 11:27 – 11:34 Ayrat Khalimov: Synthesis for Universal Register Automata in (N,<,=)

    Abstract:

    This presentation concerns the problem of reactive synthesis of systems interacting with their environment via letters from an infinite data domain. Register automata and transducers are popular formalisms for specifying and modeling such systems. They extend finite-state automata by introducing registers that are used to store data and to test incoming data against the stored one. Unlike for standard automata, the expressive power of register automata depends on whether they are deterministic, nondeterministic, or universal. Among these, universal register automata suit synthesis best as they can specify request-grant properties and allow for succinct conjunction. Because the synthesis problem from universal register automata is undecidable, researchers studied a decidable variant, called register-bounded synthesis, where additionally a bound on the number of registers in a sought transducer is given. In those synthesis works, however, automata can only compare data for equality, which limits synthesis applications. In this paper, we extend register-bounded synthesis to nondense domains equipped with a linear order. To this end, we prove a key Pumping Lemma for so-called constraint sequences, the finite abstractions of the behaviours of register automata. The Pumping Lemma allows us to reduce synthesis of register transducers to synthesis of classical (register-less) transducers. As a result, we show that register-bounded synthesis from universal register automata over the domain (N, ≤) is decidable in 2ExpTime, as in the equality-only case, giving the order for free.
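
    The commutative (Parikh) images discussed in the Parikh-theorem talk above can be illustrated in the classical finite-alphabet setting. This is a toy sketch with illustrative names, not code from any talk: the context-free language a^n b^n has the linear Parikh image {(n, n) : n ≥ 0}, a special case of semi-linearity.

    ```python
    from collections import Counter

    # Parikh's theorem (classical, finite-alphabet case): the commutative
    # image of a context-free language is semi-linear.  For
    # L = { a^n b^n } the Parikh image is the single linear set
    # { (0,0) + n*(1,1) : n >= 0 }.

    def parikh(word, alphabet="ab"):
        # Count letter occurrences, in a fixed order of the alphabet.
        c = Counter(word)
        return tuple(c[a] for a in alphabet)

    def in_linear_set(v, base=(0, 0), period=(1, 1)):
        # Is v = base + n * period for some n >= 0?
        n = v[0] - base[0]
        return n >= 0 and all(v[i] == base[i] + n * period[i]
                              for i in range(len(v)))

    words = ["", "ab", "aabb", "aaabbb"]
    print(all(in_linear_set(parikh(w)) for w in words))  # prints True
    print(in_linear_set(parikh("aab")))  # (2, 1) is outside: prints False
    ```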

Session 3: Poster session

Session 4A: Games II
Chair: Marcin Jurdziński

  • 13:45 – 13:52 Erich Grädel: Semiring Provenance for LFP and Strategy Analysis in Büchi Games

    Abstract:

    We present a case study for the application of semiring semantics for fixed-point formulae to the analysis of strategies in Büchi games. Semiring semantics generalises the classical Boolean semantics by permitting multiple truth values from certain semirings. Evaluating the fixed-point formula that defines the winning region in a given game in an appropriate semiring of polynomials provides not only the Boolean information on who wins, but also tells us how they win and which strategies they might use. The case of Büchi games is of special interest for this approach, not only due to their practical importance, but also because it is the simplest case where the fixed-point definition involves a genuine alternation of a greatest and a least fixed point. We show that, in a precise sense, semiring semantics provide information about all absorption-dominant strategies (strategies that win with minimal effort), and we discuss how these relate to positional and the more general persistent strategies. This information enables further applications such as game synthesis or determining minimal modifications to the game needed to change its outcome.
  • 13:52 – 13:59 Antonio Casares: The Arena-Independent Memory Requirements of Muller Conditions

    Abstract:

    Minimisation of automata is a well-studied problem, and in the case of automata on infinite words, it is known that minimising state-based Büchi automata is NP-complete. However, this problem might be easier when considering automata with the accepting condition defined over transitions. Our first contribution is a proof of the NP-completeness of the minimisation of transition-based Rabin automata, as well as a proof that transition-based parity automata recognising Muller conditions can be minimised in polynomial time. We also show that the minimal amount of memory required to solve games using a given Muller condition is exactly the size of a minimal Rabin automaton recognising this condition, if we only allow chromatic memories. We deduce that finding the chromatic memory requirements of a Muller condition is NP-complete. We also provide a counter-example to a conjecture by Kopczyński: we require more memory if we only consider chromatic memory structures.
  • 13:59 – 14:06 Guy Avni: Bidding Graph Games with Partially-Observed Budgets

    Abstract:

    In a two-player zero-sum graph game, the players move a token throughout a graph to produce an infinite path, which determines the winner or payoff of the game. Bidding games are graph games in which the players have budgets and in each turn, an auction determines which player moves the token. The motivation for this work arises from the potential of bidding games to model stateful and ongoing auctions, for example, online advertisement campaigns that run on multiple publishers (e.g., New York Times or Washington Post) who auction their advertisement slots in an ongoing manner (e.g., daily). Previously, bidding games were only studied as full-information games. In most auction domains, however, it is rarely the case that buyers have full information on the budgets of their opponents. We study, for the first time, bidding games in which the budgets of the players are drawn from a probability distribution, so a player has only partial information on the opponent's available budget. We stress that unlike partial-information graph games, we consider games in which there is full information regarding the position of the token. We study qualitative and quantitative objectives with the canonical bidding mechanisms. Our most interesting results are for mean-payoff games with poorman bidding: whenever a player wins a bidding, the bid is paid to the "bank". Poorman bidding is interesting in the partial-information setting since in the full-information setting, unlike other bidding mechanisms, the optimal payoff a player can guarantee in a mean-payoff poorman-bidding game is strictly increasing with the initial budget. We focus on the case of one-sided partial information, which is already both surprising and technically challenging. We classify the optimal expected payoff that the partially-informed player can guarantee with any initial budget in any strongly-connected game.
    In addition, we classify the optimal expected payoff that the fully-informed player can guarantee in a simple strongly-connected game, which reveals surprises. First, the optimal strategy of the fully-informed player reveals immediately her true initial budget. That is, the fully-informed player cannot use the partial information of her opponent to her advantage. Second, contrary to bidding, turn-based, and stochastic games, in partial-information mean-payoff bidding games the value does not necessarily exist under pure strategies: there are cases in which the optimal expected payoff that Max can guarantee is strictly lower than the optimal payoff Min can guarantee. Joint work with Ismaël Jecker and Đorđe Žikelić.
  • 14:06 – 14:13 Jonathan Lenchner: New Progress with a Forgotten Logical Game

    Abstract:

    We describe our rediscovery of an intriguing logical game, introduced by Immerman in 1981, but then, until now, never again mentioned in the literature. The game is played on two sets A and B of structures. These multi-structural games generalize Ehrenfeucht-Fraïssé games. Whereas Ehrenfeucht-Fraïssé games capture the quantifier rank of a first-order sentence, multi-structural games capture the number of quantifiers, in the sense that Spoiler wins the r-round game if and only if there is a first-order sentence φ with at most r quantifiers, where every structure in A satisfies φ and no structure in B satisfies φ. We use these games to give a complete characterization of the number of quantifiers required to distinguish linear orders of different sizes, and develop machinery for analyzing structures beyond linear orders.
  • 14:13 – 14:20 Julie Parreaux: Playing Stochastically in Weighted Timed Games to Emulate Memory

    Abstract:

    Weighted timed games are two-player zero-sum games played on a timed automaton equipped with integer weights. We consider optimal reachability objectives, in which one of the players, whom we call Min, wants to reach a target location while minimising the accumulated weight. While deciding whether Min has a strategy to guarantee a value lower than a given threshold is known to be undecidable (with two or more clocks), several conditions, one of them being divergence, have been given to recover decidability. In such weighted timed games (as in untimed weighted games in the presence of negative weights), Min may need finite memory to play (close to) optimally. It is thus tempting to try to emulate this finite memory with other strategic capabilities. In this work, we allow the players to use stochastic decisions, both in the choice of transitions and of timing delays. We give, for the first time, a definition of the expected value in weighted timed games, overcoming several theoretical challenges. We then show that, in divergent weighted timed games, the stochastic value is indeed equal to the classical (deterministic) value, thus proving that Min can guarantee the same value while using only stochastic choices, and no memory. This is joint work with Benjamin Monmege and Pierre-Alain Reynier.
  • 14:20 – 14:27 Léonard Brice: Subgame-perfect Equilibria and Negotiation

    Abstract:

    We present a characterization of subgame-perfect equilibria in a large class of infinite-duration multiplayer games on graphs, and we apply it to solve the constrained SPE existence problem in two natural classes, mean-payoff games and parity games. To this end, we introduce the notions of requirements and of negotiation, and we prove that subgame-perfect equilibria are characterized by the requirement that is the least fixed point of the negotiation function. We present methods to compute that least fixed point.

Session 4B: Database theory
Chair: Filip Murlak

  • 13:45 – 13:52 Corentin Barloy: Stackless Processing of Streamed Trees

    Abstract:

    Processing tree-structured data in the streaming model is a challenge: capturing regular properties of streamed trees by means of a stack is costly in memory, but falling back to finite-state automata drastically limits the computational power. We propose an intermediate stackless model based on register automata equipped with a single counter, used to maintain the current depth in the tree. We explore the power of this model to validate and query streamed trees. Our main result is an effective characterization of regular path queries (RPQs) that can be evaluated stacklessly, with and without registers. In particular, we confirm a characterization conjectured by Segoufin and Vianu (2002) of the tree languages defined by DTDs that are recognizable without registers, in the special case of tree languages defined by means of an RPQ. This is joint work with Filip Murlak and Charles Paperman.
  • 13:52 – 13:59 Cristina Feier: Characterising Fixed Parameter Tractability of Query Evaluation Over Guarded TGDs

    Abstract:

    We are interested in delineating the cases where ontology-mediated queries (OMQs) based on guarded TGDs and unions of conjunctive queries can be evaluated efficiently, where by efficient we mean fixed-parameter tractable. We study the problem in the most general case, where relational symbols may have unbounded arity. We provide semantic and syntactic characterisations of fixed-parameter tractability of evaluating recursively enumerable classes of such OMQs. On the way to establishing the characterisations, we introduce an fpt-reduction from deciding parameterized uniform CSPs to parameterized OMQ evaluation which is of independent interest. The reduction preserves measures known to be essential for characterizing the parameterized complexity of uniform CSPs: submodular width (in the case of unrestricted-arity schemas) and treewidth (in the case of bounded-arity schemas). As such, it can be employed to obtain hardness results for evaluation of OMQs based on guarded TGDs in both the unrestricted and the bounded arity case. Previously, in the case of bounded-arity schemas, this had been tackled using a technique requiring full introspection into the construction employed by Grohe.
  • 13:59 – 14:06 Marcin Przybyłko: Answer Counting under Guarded TGDs

    Abstract:

    Tuple-generating dependencies (TGDs) are a prominent formalism for formulating database constraints. A TGD states that if certain facts are true, then certain other facts must be true as well. This can be interpreted in different ways. In ontology-mediated querying, TGDs give rise to ontology languages and are used to derive new facts in addition to those that are present in the database. In a more classical setup that we refer to as querying under constraints, TGDs are used as integrity constraints on the database; that is, a TGD expresses the promise that if certain facts are present in the database, then certain other facts are present as well. In this talk, I will discuss the problem of counting answers to ontology-mediated queries (OMQs) in the context of parameterized complexity theory, with the query being the parameter. I will focus on the case where the ontology is given as a set of guarded TGDs while the actual queries are (unions of) conjunctive queries ((U)CQs), and show how, in this setting, to lift a recent classification due to Dell et al. for UCQs without ontologies and constraints to the world of OMQs. This presentation is based on joint work with Cristina Feier and Carsten Lutz, published at ICDT 2021.
  • 14:06 – 14:13 Maximilian Merz: Probabilistic Databases under Updates: Boolean Query Evaluation and Ranked Enumeration

    Abstract:

    We consider tuple-independent probabilistic databases in a dynamic setting, where tuples can be inserted or deleted. In this context we are interested in efficient data structures for maintaining the query result of Boolean as well as non-Boolean queries. For Boolean queries, we show how the known lifted inference rules can be made dynamic, so that they support single-tuple updates with only a constant number of arithmetic operations. As a consequence, we obtain that the probability of every safe UCQ can be maintained with constant update time. For non-Boolean queries, our task is to enumerate all result tuples ranked by their probability. We develop lifted inference rules for non-Boolean queries, and, based on these rules, provide a dynamic data structure that allows both log-time updates and ranked enumeration with logarithmic delay. As an application, we identify a fragment of non-repeating conjunctive queries that supports log-time updates as well as log-delay ranked enumeration. This characterisation is tight under the OMv-conjecture. This is joint work with Christoph Berkholz. The presentation is based on our paper at PODS 2021.
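    To illustrate the flavor of a dynamic lifted-inference rule with a minimal sketch (our own toy example, not the authors' data structure): for the safe Boolean query "is at least one tuple of a single tuple-independent table present?", the answer probability is 1 − ∏(1 − p_t) over the tuples t, and maintaining the running product supports single-tuple inserts and deletes with a constant number of arithmetic operations.

```python
class DynamicAnyQuery:
    """Toy sketch: maintain P(at least one tuple present) over one
    tuple-independent table under single-tuple inserts and deletes,
    with O(1) arithmetic per update.  Assumes tuple probabilities
    are < 1 so the running product stays invertible on deletion."""

    def __init__(self):
        self.none_present = 1.0  # product of (1 - p_t) over all tuples
        self.probs = {}          # tuple id -> probability

    def insert(self, tid, p):
        self.probs[tid] = p
        self.none_present *= (1.0 - p)

    def delete(self, tid):
        p = self.probs.pop(tid)
        self.none_present /= (1.0 - p)

    def probability(self):
        # P(query true) = 1 - P(no tuple is present)
        return 1.0 - self.none_present


q = DynamicAnyQuery()
q.insert("a", 0.5)
q.insert("b", 0.5)
print(q.probability())  # 0.75
q.delete("a")
print(q.probability())  # 0.5
```

The general lifted-inference rules of the paper extend this idea to independent-AND, independent-OR, and inclusion-exclusion steps of safe query plans.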
  • 14:13 – 14:20 Varun Ramanathan: Efficiently Evaluating Extended CRPQ

    Abstract:

    We study the query evaluation problem for Extended Conjunctive Regular Path Queries (ECRPQ), a powerful class of queries used to extract information from graph databases. Generally, the problem is PSPACE-complete for ECRPQ, so a natural question emerges as to which subclasses of ECRPQ might have tractable query evaluation. We develop a new multi-hypergraph based abstraction of ECRPQs that answers this question in great detail, both for the query evaluation problem as well as its parameterized version.
  • 14:20 – 14:27 Markus L. Schmid: Spanner Evaluation over SLP-Compressed Documents

    Abstract:

    We consider the problem of evaluating regular spanners over compressed documents, i.e., we wish to solve evaluation tasks directly on the compressed data, without decompression. As compressed forms of the documents we use straight-line programs (SLPs) – a lossless compression scheme for textual data widely used in different areas of theoretical computer science and particularly well-suited for algorithmics on compressed data. In data complexity, our results are as follows. For a regular spanner M and an SLP S that represents a document D, we can solve the tasks of model checking and of checking non-emptiness in time O(size(S)). Computing the set [[M]](D) of all span-tuples extracted from D can be done in time O(size(S) · |[[M]](D)|), and enumeration of [[M]](D) can be done with linear preprocessing O(size(S)) and a delay of O(depth(S)), where depth(S) is the depth of S’s derivation tree. Note that size(S) can be exponentially smaller than the document’s size |D|; and, due to known balancing results for SLPs, we can always assume that depth(S) = O(log(|D|)) independent of D’s compressibility. Hence, our enumeration algorithm has a delay logarithmic in the size of the non-compressed data and a preprocessing time that is at best (i.e., in the case of highly compressible documents) also logarithmic, but at worst still linear. Therefore, from a big-data perspective, our enumeration algorithm for SLP-compressed documents may nevertheless beat the known linear-preprocessing, constant-delay algorithms for non-compressed documents.
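    The exponential gap between size(S) and |D| can be seen with a minimal, self-contained sketch (our own illustration, independent of the paper's algorithms): the doubling SLP with n rules derives a document of length 2^n, and derived lengths can be computed bottom-up in O(size(S)) without decompressing.

```python
# A straight-line program (SLP) is a grammar in which every nonterminal
# has exactly one rule, either a terminal string or a pair of earlier
# nonterminals.  The SLP X_n -> X_{n-1} X_{n-1}, X_0 -> 'a' has size
# and depth n but derives the document a^(2^n).

def slp_doubling(n):
    """Rules as a list: rules[0] is a terminal, rules[i] = (i-1, i-1)."""
    return ["a"] + [(i - 1, i - 1) for i in range(1, n + 1)]

def expand(rules, i):
    """Fully decompress rule i (exponential output, small n only!)."""
    r = rules[i]
    if isinstance(r, str):
        return r
    return expand(rules, r[0]) + expand(rules, r[1])

def derived_length(rules):
    """Length |D| of the derived document, computed bottom-up in
    O(size(S)) arithmetic operations, without decompression."""
    lengths = []
    for r in rules:
        if isinstance(r, str):
            lengths.append(len(r))
        else:
            lengths.append(lengths[r[0]] + lengths[r[1]])
    return lengths[-1]

print(expand(slp_doubling(3), 3))          # aaaaaaaa
print(derived_length(slp_doubling(20)))    # 1048576 = 2**20
```

Navigating the derivation tree in this indexed fashion is also the basic ingredient behind the O(depth(S))-delay enumeration mentioned above.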

Session 5: Poster session

Session 6: Invited talk 2
Chair: Mikołaj Bojańczyk

  • 15:30 – 16:30 Rajeev Alur: Synchronization Schemas for Modeling Data Streams

    Abstract:

    We present a type-theoretic framework for data stream processing for real-time decision making, where the desired computation involves a mix of sequential computation, such as smoothing and detection of peaks and surges, and naturally parallel computation, such as relational operations, key-based partitioning, and map-reduce. To unify sequential (ordered) and relational (unordered) data models, we define synchronization schemas as types and series-parallel streams (SPS) as objects of these types. A synchronization schema imposes a hierarchical structure over relational types that succinctly captures ordering and synchronization requirements among different kinds of data items. Series-parallel streams naturally model objects such as relations, sequences, sequences of relations, sets of streams indexed by key values, time-based and event-based windows, and more complex structures obtained by nesting of these. We introduce series-parallel stream transformers (SPST) as a domain-specific language for modular specification of deterministic transformations over such streams. SPSTs provably specify only monotonic transformations allowing streamability, have a modular structure that can be exploited for correct parallel implementation, and are composable allowing specification of complex queries as a pipeline of transformations. We conclude by discussing research directions for the theory of automata, logics, and transducers for distributed stream processing.

Session 7A: Logic I
Chair: Thomas Colcombet

  • 16:45 – 16:52 Lovro Mrkonjić: Elementary Equivalence Versus Isomorphism in Semiring Semantics

    Abstract:

    We study the first-order axiomatisability of finite semiring interpretations or, equivalently, the question whether elementary equivalence and isomorphism coincide for valuations of atomic facts over a finite universe into a commutative semiring. Contrary to the classical case of Boolean semantics, where every finite structure is axiomatised up to isomorphism by a first-order sentence, the situation in semiring semantics is rather different, and depends on the underlying semiring. We prove that for a number of important semirings, including min-max semirings and the semirings of positive Boolean expressions, there exist finite semiring interpretations that are elementarily equivalent but not isomorphic. The same is true for the polynomial semirings that are universal for the classes of absorptive, idempotent, and fully idempotent semirings, respectively. On the other hand, we prove that for other, practically relevant, semirings such as the Viterbi semiring, the tropical semiring, the natural semiring and the universal polynomial semiring N[X], all finite semiring interpretations are first-order axiomatisable, and thus elementary equivalence implies isomorphism. This talk is based on a paper written jointly with Erich Grädel, to be presented at ICALP 2021 (pre-print: https://arxiv.org/abs/2102.05473). During the presentation, I will briefly explain the generalisation of elementary equivalence and isomorphism from classical structures to semiring interpretations and summarise our main results as well as two general techniques to prove axiomatisability or non-axiomatisability of semiring interpretations up to isomorphism.
  • 16:52 – 16:59 Jakub Różycki: On the Expressiveness of Büchi Arithmetic

    Abstract:

    We show that the existential fragment of Büchi arithmetic is strictly less expressive than full Büchi arithmetic of any base, and moreover establish that its Sigma_2-fragment is already expressively complete. Furthermore, we show that regular languages of polynomial growth are definable in the existential fragment of Büchi arithmetic.
  • 16:59 – 17:06 Dylan Bellier: Good-for-Game QPTL: An Alternating Hodges Semantics

    Abstract:

    An extension of QPTL is considered where functional dependencies among the quantified variables can be restricted in such a way that their current values are independent of the future values of the other variables. This restriction is tightly connected to the notion of behavioral strategies in game theory and allows the resulting logic to naturally express game-theoretic concepts. The fragment where only restricted quantifications, called behavioral quantifications, are considered can be decided, for both model checking and satisfiability, in 2ExpTime and is expressively equivalent to QPTL, though significantly less succinct.
  • 17:06 – 17:13 Giuseppe Perelli: Behavioral QLTL

    Abstract:

    We introduce Behavioral QLTL, a “behavioral” variant of linear-time temporal logic on infinite traces with second-order quantifiers. Behavioral QLTL is characterized by the fact that the functions that assign the truth value of the quantified propositions along the trace can only depend on the past. In other words, such functions must be “processes”. This gives the logic a strategic flavor that we usually associate with planning. Indeed, we show that temporally extended planning in nondeterministic domains, as well as LTL synthesis, can be expressed in Behavioral QLTL through formulas with a simple quantification alternation, while, as this alternation increases, we obtain forms of planning/synthesis in which conditional and conformant planning aspects are mixed. We study this logic from the computational point of view and compare it to the original QLTL (with non-behavioral semantics) and to simpler forms of behavioral semantics.
  • 17:13 – 17:20 Antonio Di Stasio: Two-Stage Technique for LTLf Synthesis Under LTL Assumptions

    Abstract:

    In synthesis, assumptions are constraints on the environment that rule out certain environment behaviors. A key observation is that even if we consider systems with LTLf goals on finite traces, assumptions need to be expressed considering infinite traces, using LTL on infinite traces, since the decision to stop the trace is controlled by the agent. To solve synthesis of LTLf goals under LTL assumptions, we could reduce the problem to LTL synthesis. Unfortunately, while synthesis in LTLf and in LTL have the same worst-case complexity (both are 2EXPTIME-complete), the algorithms available for LTL synthesis perform much worse in practice than those for LTLf synthesis. Recently, it has been shown that for basic forms of fairness and stability assumptions we can avoid such a detour to LTL and keep the simplicity of LTLf synthesis. In this paper, we generalize these results and show how to effectively handle any kind of LTL assumption. Specifically, we devise a two-stage technique for solving LTLf synthesis under general LTL assumptions and show empirically that this technique performs much better than standard LTL synthesis. This is joint work with Giuseppe De Giacomo, Moshe Vardi and Shufang Zhu. The paper was accepted at the 17th International Conference on Principles of Knowledge Representation and Reasoning, KR 2020.

Session 7B: Verification I
Chair: Corina Cirstea

  • 16:45 – 16:52 Shaun Azzopardi: Incorporating Monitors in Reactive Synthesis without Paying the Price

    Abstract:

    Reactive synthesis has a high complexity, 2EXPTIME, which can make the synthesis of real systems with large, complex specifications difficult. However, parts of a system may already exist or may be produced manually (e.g. when parts of the specification are out of the scope of synthesis). Synthesis can then instead be applied to the remaining part of the specification. An interesting problem to consider in this context is what level of knowledge of the other components is required for the new component’s synthesis. We consider pre-existing components in the form of rich symbolic-automata environment monitors (a lower-level specification language common in the runtime verification community). The choice of symbolic automata allows parametrised specifications and an exponentially succinct representation. We consider their combination with LTL formulas in a sequential fashion (a monitor triggers the component corresponding to the formula), and/or with repetition (when the synthesised component finishes, the monitoring starts again). We introduce the notions of tight realisability and tight controllers for co-safety LTL formulas, allowing the controllers to signal immediately when a formula has been fulfilled for any given trace, so as to allow for regular repetition. We show that realisable LTL formulas can be synthesised and plugged in without any knowledge of the monitor, while knowledge of the monitor increases the space of realisable specifications. This work illustrates how LTL synthesis can be used in richer and more imperative contexts, increasing its scope. This work was carried out with Nir Piterman and Gerardo Schneider, and is based on work to appear in ATVA 2021.
  • 16:52 – 16:59 Kush Grover: Motion Planning in Unknown Environments

    Abstract:

    Complex mission specifications for robots can often be specified through temporal logics, such as Linear Temporal Logic and its syntactically co-safe fragment, scLTL. Finding trajectories that satisfy such specifications becomes hard if the robot is to fulfil the mission in an initially unknown environment, where neither the locations of regions or objects of interest in the environment nor the obstacle space are known a priori. We propose an algorithm that, while exploring the environment, learns important semantic dependencies in the form of a semantic abstraction, and uses it to bias the growth of an RRG graph towards faster mission completion. Our approach leads to finding trajectories that are much shorter than those found by the sequential approach, which first explores and then plans. Simulations comparing our solution to the sequential approach, carried out in 100 randomized office-like environments, show more than a 50% reduction in trajectory length. This work was done in collaboration with F. Barbosa, J. Tumova, and J. Kretinsky.
  • 16:59 – 17:06 Suguman Bansal: Synthesis for Temporal and Satisficing Goals

    Abstract:

    Reactive synthesis from high-level specifications that combine hard constraints expressed in Linear Temporal Logic (LTL) with soft constraints expressed by discounted-sum (DS) rewards has applications in planning and reinforcement learning. An existing approach combines techniques from LTL synthesis with optimization for the DS rewards but has failed to yield a sound algorithm. An alternative approach combining LTL synthesis with satisficing DS rewards (rewards that achieve a threshold) is sound and complete for integer discount factors, but, in practice, a fractional discount factor is desired. This work extends the existing satisficing approach, presenting the first sound algorithm for synthesis from LTL and DS rewards with fractional discount factors. The utility of our algorithm is demonstrated on robotic planning domains.
  • 17:06 – 17:13 Priyanka Golia: Manthan: A Data-Driven Approach for Boolean Function Synthesis

    Abstract:

    Boolean functional synthesis is a fundamental problem in computer science with wide-ranging applications and has witnessed a surge of interest resulting in progressively improved techniques over the past decade. Despite intense algorithmic development, a large number of problems remain beyond the reach of state-of-the-art techniques. Motivated by the progress in machine learning, we propose Manthan, a novel data-driven approach to Boolean functional synthesis. Manthan views functional synthesis as a classification problem, relying on advances in constrained sampling for data generation, and on advances in automated reasoning for a novel proof-guided refinement and provable verification. In an extensive and rigorous evaluation over 609 benchmarks, we demonstrate that Manthan significantly improves upon the current state of the art, solving 356 benchmarks compared to 280, the most solved by any state-of-the-art technique; thereby, we demonstrate an improvement of 76 benchmarks over the current state of the art. Furthermore, Manthan solves 60 benchmarks that none of the current state-of-the-art techniques could solve. The significant performance improvements, along with our detailed analysis, highlight several interesting avenues for future work at the intersection of machine learning, constrained sampling, and automated reasoning. This is joint work with Subhajit Roy and Kuldeep S. Meel. The corresponding paper was published at the International Conference on Computer-Aided Verification, CAV, 2020.
  • 17:13 – 17:20 S. Akshay: Knowledge Representations for Efficient Boolean Functional Synthesis

    Abstract:

    Given a relational specification between Boolean inputs and outputs, the goal of Boolean functional synthesis is to synthesize each output as a function of the inputs such that the specification is satisfied. More precisely, given a Boolean formula F(X, I), where X is a vector of outputs and I is a vector of inputs, the Boolean Skolem function synthesis problem (BFnS) asks to compute a Skolem function vector G(I) such that F(G(I), I) holds whenever ∃X F(X, I) holds. This problem has a wide variety of applications, ranging from cryptanalysis to certified QBF solving to reactive synthesis. In general, this problem is known to be hard: indeed, complexity-theoretic hardness results show that polynomial-sized and polynomial-time-computable Skolem functions are unlikely to always exist. However, algorithms developed using different techniques seem to perform surprisingly well on a wide set of benchmarks. This motivates us to ask if there is structure in the input representation that allows for efficient computation of Skolem functions. This investigation leads us down a rabbit hole of knowledge representations which enable efficient (in size and time) synthesis of Skolem functions. First, it is easy to see that well-known normal forms like BDDs and DNNFs permit efficient synthesis. But we can go significantly beyond this and obtain a semantic (checkable with a SAT solver!) condition on the specification that also allows for efficient Skolem function synthesis. This results in a normal form that is exponentially more succinct than the previous ones and yet permits efficient Skolem function synthesis. Finally, we ask whether we can precisely characterize polynomial-time and polynomial-sized Skolem function synthesis. Surprisingly, the answer is yes! We develop a normal form that captures exactly this requirement and examine its many properties and possible implications. This work is based on papers presented at FMCAD’2019 and on a very recent work accepted at LICS’2021.
[LICS’2021] A Normal Form Characterization for Efficient Boolean Skolem Function Synthesis. Long version available on arXiv: https://arxiv.org/abs/2104.14098
[FMCAD’2019] Knowledge Compilation for Boolean Functional Synthesis. Long version available on arXiv: https://arxiv.org/abs/1908.06275
[Coauthors: All of this work is joint with Supratik Chakraborty of IIT Bombay. The LICS’21 paper is also with Preey Shah and Aman Bansal, while FMCAD’19 was done jointly with Jatin Arora, Divya Raghunathan, Shetal Shah and S. Krishna. All authors are from IIT Bombay or were at IIT Bombay when the bulk of the work was carried out.]
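    As a toy illustration of the BFnS problem statement above (a brute-force sketch of our own, not one of the algorithms or normal forms from the talk), a Skolem function can always be read off the truth table of the specification, at exponential cost:

```python
from itertools import product

def skolem_brute_force(F, n_x, n_i):
    """Given a specification F(X, I) as a Python predicate over bit
    vectors, return a table G mapping each input vector I to some X
    with F(X, I), whenever such an X exists.  Exponential in n_x and
    n_i; for illustration of the problem statement only."""
    G = {}
    for I in product([0, 1], repeat=n_i):
        for X in product([0, 1], repeat=n_x):
            if F(X, I):
                G[I] = X   # any witness will do
                break
    return G

# Toy specification: the single output must equal the XOR of the inputs.
F = lambda X, I: X[0] == I[0] ^ I[1]
G = skolem_brute_force(F, n_x=1, n_i=2)
print(G[(1, 0)])  # (1,)
# Skolem property: F(G(I), I) holds whenever some X satisfies F(X, I).
assert all(F(G[I], I) for I in G)
```

The point of the representations discussed in the talk is precisely to avoid this exponential enumeration for suitably structured inputs.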

Session 8: Poster session


Thursday 16/09/2021

Session 9: Invited talk 3
Chair: Mahsa Shirmohammadi

  • 09:30 – 10:30 Nutan Limaye: Word polynomials and algebraic circuit lower bounds

    Abstract:

    Polynomials are ubiquitous objects which we encounter in mathematics, the life sciences, and computation. An algebraic circuit is a model of computation of polynomials. It consists of addition and multiplication gates, connected in a DAG structure, that takes variables and constants as inputs and outputs a polynomial. The size of an algebraic circuit is the number of addition and multiplication gates it uses. One way to understand the complexity of computing polynomials is to obtain bounds on the sizes of the circuits needed to compute them. This complexity-theoretic study was initiated by Leslie Valiant in 1979 as a route towards attacking the famous P vs. NP problem. In this talk, we will look at polynomials inspired by automata-theoretic ideas, which we will call word polynomials. We will show how they have helped in proving strong lower bounds. In fact, in our recent work, these polynomials were used to prove the first superpolynomial lower bound for reasonably general models of computation. This talk is based on joint works with Guillaume Lagarde, Guillaume Malod, Srikanth Srinivasan, and Sébastien Tavenas.

Session 10A: Games III
Chair: Jean-Francois Raskin

  • 10:45 – 10:52 Julian Gutierrez: Cooperative Concurrent Games

    Abstract:

    Rational verification is the problem of determining which temporal logic properties will hold in a multi-agent system, under the assumption that agents in the system act rationally, by choosing strategies that collectively form a game theoretic equilibrium. Previous work in this area has largely focussed on non-cooperative games, with Nash equilibrium and a number of variants as the main game-theoretic solution concepts. Here, we extend the rational verification framework to cooperative games, in which players may form coalitions to collectively achieve their goals, and base our study on the core as our basic solution concept. In particular, we have developed the theory and algorithms for rational verification in both deterministic and probabilistic systems, using concurrent game structures (CGS) and concurrent stochastic games (CSG), respectively, as models. We show that if players’ goals are given by LTL formulae, regardless of the type of solution concept (Nash equilibrium or the core) or underlying model (CGS or CSG), solving such multi-player games can be done in 2EXPTIME, but in each case requiring rather different proof techniques.
  • 10:52 – 10:59 Daniele Dell’Erba: Priority Promotion with Parysian Flair

    Abstract:

    We develop an algorithm that combines the advantages of priority promotion – one of the leading approaches to solving large parity games in practice – with the quasi-polynomial time guarantees offered by Parys’ algorithm. Hybridising these algorithms sounds both natural and difficult, as they both generalise the classic recursive algorithm in different ways that appear to be irreconcilable: while the promotion transcends the call structure, the guarantees change on each level. We show that an interface that respects both is not only effective, but also efficient.
  • 10:59 – 11:06 Dominik Wojtczak: A Recursive Approach to Solving Parity Games in Quasipolynomial Time

    Abstract:

    Zielonka’s classic recursive algorithm for solving parity games is perhaps the simplest among the many existing parity game algorithms. However, its complexity is exponential, while currently the state-of-the-art algorithms have quasipolynomial complexity. Here, we present a modification of Zielonka’s classic algorithm that brings its complexity down to be in line with previous quasipolynomial-time solutions. This is joint work with Pawel Parys, Karoliina Lehtinen, and Sven Schewe.
  • 11:06 – 11:13 K. S. Thejaswini: A symmetric attractor-decomposition lifting algorithm for parity games

    Abstract:

    Progress-measure lifting algorithms for solving parity games have the best worst-case asymptotic runtime, but are limited by their asymmetric nature, and known from the work of Czerwiński et al. (2018) to be subject to a matching quasi-polynomial lower bound inherited from the combinatorics of universal trees. Parys (2019) has developed an ingenious quasi-polynomial McNaughton-Zielonka-style algorithm, and Lehtinen et al. (2019) have improved its worst-case runtime. Jurdziński and Morvan (2020) have recently brought forward a generic attractor-based algorithm, formalizing a second class of quasi-polynomial solutions to solving parity games, which have runtime quadratic in the size of universal trees. First, we adapt the framework of iterative lifting algorithms to computing attractor-based strategies. Second, we design a symmetric lifting algorithm in this setting, in which two lifting iterations, one for each player, accelerate each other in a recursive fashion. The symmetric algorithm performs at least as well as progress-measure liftings in the worst case, whilst bypassing their inherent asymmetric limitation. Third, we argue that the behaviour of the generic attractor-based algorithm of Jurdziński and Morvan (2020) can be reproduced by a specific deceleration of our symmetric lifting algorithm, in which some of the information collected by the algorithm is repeatedly discarded. This yields a novel interpretation of McNaughton-Zielonka-style algorithms as progress-measure lifting iterations (with deliberate set-backs), further strengthening the ties between all known quasi-polynomial algorithms to date. This is joint work with Marcin Jurdziński, Rémi Morvan and Pierre Ohlmann.
  • 11:13 – 11:20 James C. A. Main: Time Flies When Looking out of the Window: Timed Games with Window Parity Objectives

    Abstract:

    The window mechanism was introduced by Chatterjee et al. to reinforce mean-payoff and total-payoff objectives with time bounds in two-player turn-based games on graphs. It has since proved useful in a variety of settings, including parity objectives in games and both mean-payoff and parity objectives in Markov decision processes. We study window parity objectives in timed automata and timed games: given a bound on the window size, a path satisfies such an objective if, in all states along the path, we see a sufficiently small window in which the smallest priority is even. We show that checking that all time-divergent paths of a timed automaton satisfy such a window parity objective can be done in polynomial space, and that the corresponding timed games can be solved in exponential time. This matches the complexity class of timed parity games, while adding the ability to reason about time bounds. We also consider multi-dimensional objectives and show that the complexity class does not increase. To the best of our knowledge, this is the first study of the window mechanism in a real-time setting. This is joint work with Mickael Randour and Jeremy Sproston.
  • 11:20 – 11:27 Pierre Ohlmann: Generic algorithm for parity and mean payoff games

    Abstract:

    In this talk, we present linear graphs for generic positional objectives, such as the parity or mean-payoff objectives, in the context of infinite-duration two-player games over finite graphs. These were originally designed to capture the new quasi-polynomial value-iteration algorithms for parity games. We show that this framework also allows us to formalize other families of algorithms, such as strategy-improvement algorithms or, in the context of parity games, attractor-based algorithms.
  • 11:27 – 11:34 Maximilian Weininger: Comparison of Algorithms for Simple Stochastic Games

    Abstract:

    Simple stochastic games are turn-based 2.5-player zero-sum games with a reachability objective. The problem is to compute the winning probability as well as the optimal strategies of both players. In this work, we compare the three known classes of algorithms — value iteration, strategy iteration and quadratic programming — both theoretically and practically. Further, we suggest several improvements for all algorithms, including the first approach based on quadratic programming that avoids transforming the stochastic game to a stopping one. Our extensive experiments show that these improvements can lead to significant speed-ups. We implemented all algorithms in PRISM-games 3, thereby providing the first implementation of quadratic programming for solving simple stochastic games. This is joint work with Jan Kretinsky, Emanuel Ramneantu and Alexander Slivinskiy. It has been published at GandALF 2020 (https://arxiv.org/abs/2009.10882v1) and the respective presentation (https://www.youtube.com/watch?v=kEvJg7xJOv4) won the best video award.
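The value iteration approach compared in the last abstract above can be illustrated with a small sketch. This is a simplified illustration, not the PRISM-games implementation: the state kinds, the dictionary encoding of the game, and the fixed iteration count are all assumptions made for the example.

```python
def ssg_value_iteration(game, targets, n_iters=200):
    """Value iteration for reachability in a simple stochastic game.

    game maps each state to (kind, successors):
      - kind "max" or "min": successors is a list of states,
      - kind "avg": successors is a list of (probability, state) pairs.
    States in `targets` are absorbing with value 1.
    """
    values = {s: (1.0 if s in targets else 0.0) for s in game}
    for _ in range(n_iters):
        for s, (kind, succ) in game.items():
            if s in targets:
                continue
            if kind == "max":
                values[s] = max(values[t] for t in succ)
            elif kind == "min":
                values[s] = min(values[t] for t in succ)
            else:  # chance node: weighted average over successors
                values[s] = sum(p * values[t] for p, t in succ)
    return values

# Tiny example: the minimizer at s0 chooses between reaching the goal
# surely (value 1) and a fair coin flip between the goal and a dead end.
game = {
    "s0":   ("min", ["goal", "s1"]),
    "s1":   ("avg", [(0.5, "goal"), (0.5, "dead")]),
    "goal": ("max", ["goal"]),
    "dead": ("max", ["dead"]),
}
values = ssg_value_iteration(game, {"goal"})
```

On this stopping game the iteration converges from below to the optimal values: the minimizer at s0 prefers the coin-flip branch, giving value 1/2.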

Session 10B: Verification II
Chair: Krishna S.

  • 10:45 – 10:52 Jakob Piribauer: Positivity-hardness results for Markov decision processes

    Abstract:

    In this talk, I give a brief overview of a series of novel reductions from the Positivity problem for linear recurrence sequences to various decision problems on Markov decision processes (MDPs). The Positivity problem is a famous number-theoretic problem whose decidability status has been open for many decades. A proof that a problem is Positivity-hard, i.e. at least as hard as the Positivity problem, establishes that the problem exhibits the same inherent mathematical difficulty as the Positivity problem and that it most likely cannot be solved with known techniques. The reductions I present are obtained by constructing MDP-gadgets that encode linear recurrence sequences. These gadgets can easily be adjusted depending on the problem on MDPs under consideration. This flexible proof technique is then applied to establish Positivity-hardness of weight-bounded reachability problems, of termination problems for one-counter MDPs, of problems concerning the satisfaction probability of energy objectives, of variants of the stochastic shortest path problem, and of the model-checking problem of frequency-LTL, among others. The results of this talk have partly been published in joint work with Christel Baier at ICALP 2020 and are presented in my recently submitted PhD thesis.
  • 10:52 – 10:59 Tobias Meggendorfer: Risk-aware Reachability and SSP

    Abstract:

    We present a novel approach to risk in reachability and stochastic shortest path (SSP) problems on Markov chains and decision processes. In particular, we consider the time to arrive at the goal states and the total accumulated cost on the way, respectively, to quantify risk. Motivated by our previous work, we employ conditional value-at-risk (CVaR) as a measure of risk. In essence, CVaR quantifies risk as the expectation of the worst p-quantile. We present ongoing work on the computational complexity of the respective decision problems, the structure of optimal strategies, and, in contrast to our previous work, also a value iteration algorithm.
  • 10:59 – 11:06 Nikolas Mählmann: SAT via Recursive Backdoors

    Abstract:

    Due to its expressiveness, the SAT problem of checking whether a formula of propositional logic is satisfiable is widely used as a general problem-solving framework. While SAT is computationally hard on general formulas, there exist restricted tractable classes of formulas for which SAT solving is known to be efficient. A backdoor of a CNF formula phi to a tractable class C of formulas is a set B of variables of phi that, when assigned, reduces phi to a formula from C. Backdoors of small size or with good structure lead to efficient solutions for SAT. In our paper we introduce the new notion of recursive backdoors, which generalize backdoors and exploit the structure of formulas that can be recursively split into independent parts by partial assignments. Our generalization is motivated by the observation that independent or loosely connected components are common among real-world SAT instances, and many industrial solvers use value caching heuristics or component analysis in order to exploit this property. The quality of a recursive backdoor is measured by its recursive backdoor depth. Recursive backdoors of bounded depth can contain an unbounded number of variables and allow for efficient SAT solving if they are given as an input to the solver. The challenge therefore lies in the detection of recursive backdoors. For the base class of empty formulas C0, we show that recursive backdoor detection is fixed-parameter tractable and yields new tractability results for SAT.
  • 11:06 – 11:13 Pascal Baumann: Context-Bounded Verification of Liveness Properties for Multithreaded Shared-Memory Programs

    Abstract:

    We study context-bounded verification of liveness properties of multi-threaded, shared-memory programs, where each thread can spawn additional threads. Our main result shows that context-bounded fair termination is decidable for the model; context-boundedness means that each spawned thread can be context-switched a fixed constant number of times. Our proof is technical, since fair termination requires reasoning about the composition of unboundedly many threads each with unboundedly large stacks. In fact, techniques for related problems, which depend crucially on replacing the pushdown threads with finite-state threads, are not applicable. Instead, we introduce an extension of vector addition systems with states (VASS), called VASS with balloons (VASSB), as an intermediate model; it is an infinite-state model of independent interest. A VASSB allows tokens that are themselves markings (balloons). We show that context-bounded fair termination reduces to fair termination for VASSB. We show the latter problem is decidable by showing a series of reductions: from fair termination to configuration reachability for VASSB and thence to the reachability problem for VASS. For a lower bound, fair termination is known to be non-elementary already in the special case where threads run to completion (no context switches). We also show that the simpler problem of context-bounded termination is 2EXPSPACE-complete, matching the complexity bound – and indeed the techniques – for safety verification. Additionally, we show that the related problem of fair starvation, which checks if some thread can be starved along a fair run, is also decidable in the context-bounded case. The decidability proof employs an intricate reduction from fair starvation to fair termination. Like fair termination, this problem is also non-elementary.
  • 11:13 – 11:20 David Purser: Porous invariants

    Abstract:

    We introduce the notion of porous invariants for multipath (or branching/nondeterministic) affine loops over the integers; that is, invariants defined using Presburger arithmetic/semi-linear sets and subclasses. These invariants are not necessarily convex, and can in fact contain infinitely many ‘holes’. Nevertheless, we show that in many cases such invariants can be automatically synthesised, and moreover can be used to settle (non-)reachability questions for various interesting classes of affine loops and target sets. Joint work with Engel Lefaucheux, Joël Ouaknine, and James Worrell.
  • 11:20 – 11:27 Nicola Gigante: BLACK: a fast and robust tool for LTL satisfiability checking

    Abstract:

    We present BLACK (Bounded Lᴛʟ sAtisfiability ChecKer), a recently developed tool for the satisfiability checking of Linear Temporal Logic. Based on the SAT-encoding of a recently developed one-pass tree-shaped tableau by Reynolds, the tool supports both LTL and LTL+Past, both on finite and infinite models. We present the tool by briefly discussing the underlying tableau and its SAT encoding, and we compare it with other state-of-the-art competitors.
  • 11:27 – 11:34 Corto Mascle: Responsibility and verification: Importance value in temporal logics

    Abstract:

    We aim at measuring the influence of the nondeterministic choices of a part of a system on its ability to satisfy a specification. For this purpose, we apply the concept of Shapley values to verification as a means to evaluate how important a part of a system is. The importance of a component is measured by giving its control to an adversary, alone or along with other components, and testing whether the system can still fulfill the specification. We study this idea in the framework of model-checking with various classical types of linear-time specification, and propose several ways to transpose it to branching ones. In the case of branching specifications, we are led to a notion of tree games and study some tree language classes such that tree games with those languages as objectives are determined. This is joint work with Christel Baier, Florian Funke, Simon Jantsch, Stefan Kiefer and Karoliina Lehtinen. A paper was published at LICS 2021, but the talk will go further. A full version is available on arXiv: https://arxiv.org/abs/2102.06655.
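The risk-aware reachability abstract above describes CVaR as "the expectation of the worst p-quantile". For finite discrete cost distributions this has a very direct reading, sketched below; the (probability, cost) pair encoding and the convention that higher cost is worse are assumptions of the example, not part of the presented work.

```python
def cvar(outcomes, p):
    """Conditional value-at-risk at level p: the expected cost over the
    worst p-fraction of outcomes (higher cost = worse).

    outcomes: list of (probability, cost) pairs whose probabilities sum to 1.
    """
    ordered = sorted(outcomes, key=lambda pc: -pc[1])  # worst outcomes first
    remaining, total = p, 0.0
    for prob, cost in ordered:
        take = min(prob, remaining)  # probability mass taken from this outcome
        total += take * cost
        remaining -= take
        if remaining <= 0:
            break
    return total / p

# The worst 20% of this distribution: all of the 10%-likely cost 10,
# plus the rest of the quantile filled with cost 2.
risk = cvar([(0.5, 1.0), (0.4, 2.0), (0.1, 10.0)], p=0.2)
```

Here the worst 20% consists of cost 10 with mass 0.1 and cost 2 with mass 0.1, so the CVaR is (0.1 * 10 + 0.1 * 2) / 0.2 = 6.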

Session 11: Poster session

Session 12A: Logic II
Chair: Erich Grädel

  • 13:45 – 13:52 Alexander Rabinovich: On Uniformization in the Full Binary Tree

    Abstract:

    We consider variants of the uniformization problem for monadic second-order logic over the full binary tree.
  • 13:52 – 13:59 Richard Wilke: Inquisitive Monadic Second-Order Logic

    Abstract:

    Inquisitive semantics is a branch of logic where formulae are evaluated against a set of relational structures over the same domain rather than against a single structure as in classical logics. Such a set can be viewed as an information state, where the variety in the structures encodes uncertainty about the propositions. In a logic with inquisitive semantics, not only can statements about structures be formulated, but a formula can also query whether the available information suffices to settle whether a proposition holds or not. Formulae of this kind are referred to as questions. An information state settles a question if all of its structures agree on its truth. Inquisitive semantics can be applied to many different logics, and in the present work we focus on inquisitive first-order logic (InqBQ) introduced by Ciardelli. A core property of this logic, called persistency, states that whenever a formula holds in an information state, it also holds in all of its refinements (i.e. its subsets). In this talk we introduce inquisitive monadic second-order logic (InqMSO), a natural extension of InqBQ which is closed under negation and therefore does not satisfy persistency. Our main goal is to understand InqBQ as a fragment of InqMSO, hence we present a syntactical fragment of InqMSO which is equivalent to InqBQ. We further show that there are persistent InqMSO-formulae which are not expressible in InqBQ. This is joint work with Erich Grädel.
  • 13:59 – 14:06 Aliaume Lopez: A monotone variant of the Gaifman Locality Theorem

    Abstract:

    We introduce a monotone variant of the Gaifman Locality Theorem in the line of the existential variant following the work of Martin Grohe and Stefan Wöhrle. Namely, over well-behaved classes of finite structures, we prove that every monotone sentence is equivalent to a positive boolean combination of basic local sentences. This work is motivated by the study of preservation theorems over classes of finite structures. In classical model theory, preservation theorems characterize first-order definable sets enjoying some semantic property as those definable in a suitable syntactic fragment; a well-known instance is the Łoś-Tarski Theorem, stating that a first-order sentence ϕ is preserved under extensions on all structures –i.e., A |= ϕ and A is an induced substructure of B implies B |= ϕ– if and only if it is equivalent to an existential sentence. However, preservation theorems stemming from model theory usually fail when restricted to the class of finite structures, and in fact the only case where a classical preservation theorem was shown to hold on all finite structures is Rossman’s Theorem. The search for classes of finite structures where the Łoś-Tarski Theorem holds has led to an active area of finite model theory. We show that the monotone Gaifman Locality Theorem can be seen as a weaker form of the Łoś-Tarski Theorem and demonstrate its use in the construction of classes where the preservation theorem holds.
  • 14:06 – 14:13 Amaldev Manuel: An Algebraic Characterisation of First-Order Logic with Neighbour

    Abstract:

    We give an algebraic characterisation of first-order logic with the neighbour relation, on finite words. For this, we consider languages of finite words over alphabets with an involution on them. The natural algebras for such languages are involution semigroups. To characterise the logic, we define a special kind of semidirect product of involution semigroups, called the locally hermitian product. The characterisation theorem for FO with neighbour states that a language is definable in the logic if and only if it is recognised by a locally hermitian product of an aperiodic commutative involution semigroup, and a locally trivial involution semigroup. We then define the notion of involution varieties of languages, namely classes of languages closed under Boolean operations, quotients, involution, and inverse images of involutory morphisms. An Eilenberg-type correspondence is established between involution varieties of languages and pseudovarieties of involution semigroups. The results are to be presented in LICS ’21.
  • 14:13 – 14:20 Denis Kuperberg: Positive first-order logic on words

    Abstract:

    I will present FO+, a restriction of first-order logic where the alphabet is partially ordered, and letters are required to appear positively. We will ask a syntax versus semantics question: FO+-definable languages are monotone in the letters, but can every FO-definable monotone language be defined in FO+? On general structures, Lyndon’s theorem gives such a characterization for monotone FO, but it is known to fail on finite structures. We will see that it also fails on finite words, giving a much simpler proof for the failure of Lyndon’s theorem on finite structures. Finally we will see that FO+-definability is undecidable for regular languages on ordered alphabets.
  • 14:20 – 14:27 Nicole Schrader: First-Order Logic with Connectivity Operators

    Abstract:

    Connectivity is one of several properties that cannot be expressed in first-order logic (FO). We introduce a new logic, called FO+Conn, that has atomic operators Conn_k that can express connectivity in undirected graphs when up to k vertices (that we can quantify) have been deleted. We study the expressive power of the new logic and the complexity of its model-checking problem.
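The Conn_k operators in the last abstract above can be read operationally: after deleting a chosen set of up to k vertices, is the remaining graph connected? A brute-force sketch of that check (the adjacency-dict encoding of the graph is an assumption of the example):

```python
def connected_after_deletion(adj, deleted):
    """Is the graph restricted to the vertices outside `deleted` connected?

    adj: dict mapping each vertex to a list of its neighbours (undirected).
    """
    remaining = set(adj) - set(deleted)
    if not remaining:
        return True  # no vertices left: vacuously connected
    start = next(iter(remaining))
    seen, stack = {start}, [start]
    while stack:  # depth-first search over the surviving vertices
        u = stack.pop()
        for w in adj[u]:
            if w in remaining and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == remaining

# A 4-cycle stays connected after deleting one vertex,
# but deleting two opposite vertices disconnects it.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

This is only the semantics of a single Conn_k atom; the logic additionally quantifies over the deleted vertices, and the abstract studies its expressive power and model checking.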

Session 12B: Automata & languages II
Chair: Wojciech Czerwiński

  • 13:45 – 13:52 Stefan Hoffmann: Primitive Permutation Groups and Extremal Synchronizing Automata

    Abstract:

    Primitive permutation groups play a prominent role in permutation group theory since they were introduced by Évariste Galois in his famous proof of the unsolvability of polynomial equations of degree five or higher [Neumann2011]. An automaton is completely reachable if every subset of states is reachable from the whole set of states. This notion was introduced by Volkov & Bondar [BondarV16] as a strengthening of the notion of a synchronizing automaton, where an automaton is synchronizing if at least one singleton set is reachable from the whole state set. Synchronizing automata have a wide range of applications, and the Černý conjecture, stating that any n-state synchronizing automaton has a synchronizing word of length at most (n-1)^2, is among the most famous open problems in combinatorial automata theory [Vol2008]. Now, every automaton whose transition monoid contains a primitive permutation group on its state set and a letter of deficiency one is completely reachable, and this in fact characterizes primitivity, a result published recently [Hoffmann21]. The sync-maximal groups of degree n have been introduced [Hoffmann21] by the demand that for any non-permutation of deficiency one the minimal automaton for the set of synchronizing words of an associated automaton has size 2^n - n, i.e., is extremal in this respect. It has recently been discovered that this in fact gives another characterization of the primitive permutation groups, i.e., sync-maximal permutation groups are precisely the primitive permutation groups [HoffmannA]. This extremal property of the set of synchronizing words is also shared by families of slowly synchronizing automata, i.e., automata for which a shortest synchronizing word has quadratic length. It is conjectured that primitive permutation groups are also intimately connected to slowly synchronizing automata, a topic of ongoing work [HoffmannB].
  • 13:52 – 13:59 Dmitry Chistikov: Subcubic Certificates for CFL Reachability

    Abstract:

    The context-free language (CFL) reachability problem on graphs, as well as a closely related problem of language emptiness for pushdown automata, are core problems in interprocedural program analysis and model checking, respectively. Both can be solved in cubic time but, despite years of efforts, there are no truly subcubic algorithms known for either. We study the related certification task: given a problem instance, are there small and efficiently checkable certificates for the existence and for the non-existence of a path (or language non-emptiness and emptiness, respectively)? We show that, in both scenarios, there exist succinct certificates (O(n^2) in the size of the problem) and these certificates can be checked in subcubic (matrix multiplication) time. A natural question is whether faster algorithms for CFL reachability or PDA emptiness will lead to faster algorithms for other combinatorial problems such as SAT. As a consequence of our certification results, we show that there cannot be a fine-grained reduction from SAT to CFL reachability or PDA emptiness for a conditional lower bound stronger than n^\omega, unless the nondeterministic strong exponential-time hypothesis (NSETH) fails. This is joint work with Rupak Majumdar (MPI-SWS, Germany) and Philipp Schepper (CISPA, Germany). A preprint is available at https://arxiv.org/abs/2102.13095.
  • 13:59 – 14:06 Martin Zimmermann: A Bit of Nondeterminism Makes Pushdown Automata Expressive and Succinct

    Abstract:

    We study the expressiveness and succinctness of good-for-games pushdown automata (GFG-PDA) over finite words, that is, pushdown automata whose nondeterminism can be resolved based on the run constructed so far, but independently of the remainder of the input word. We prove that GFG-PDA recognise more languages than deterministic PDA (DPDA) but not all context-free languages (CFL). This class is orthogonal to unambiguous CFL. We further show that GFG-PDA can be exponentially more succinct than DPDA, while PDA can be double-exponentially more succinct than GFG-PDA. We also study GFGness in visibly pushdown automata (VPA), which enjoy better closure properties than PDA, and for which we show GFGness to be EXPTIME-complete. GFG-VPA can be exponentially more succinct than deterministic VPA, while VPA can be exponentially more succinct than GFG-VPA. Both of these lower bounds are tight. Finally, we study the complexity of resolving nondeterminism in GFG-PDA. Every GFG-PDA has a positional resolver, a function that resolves nondeterminism and that depends only on the current configuration. Pushdown transducers are sufficient to implement the resolvers of GFG-VPA, but not those of GFG-PDA. GFG-PDA with finite-state resolvers are determinisable.
  • 14:06 – 14:13 Petra Wolf: Generalizing Synchronizing Words to Push-Down Automata

    Abstract:

    For deterministic finite automata, a synchronizing word is a word that maps the entire state set to one state. The question of whether a deterministic finite automaton admits a software reset in the form of a so-called synchronizing word can be answered in polynomial time. We extend this algorithmic question to deterministic automata beyond finite automata. There are several interpretations of what synchronizability should mean for deterministic push-down automata. Here, a synchronizing word w does not only map all states to the same state but also fulfills some conditions on the stack content of each run after reading w. We consider three types of these stack constraints: after reading w, the stack (1) is empty in each run, (2) contains the same sequence of stack symbols in each run, or (3) contains an arbitrary sequence which is independent of the other runs. We prove that for push-down automata the question of synchronizability becomes undecidable even when looking at deterministic one-counter automata. This is also true for another classical mild extension of regularity, namely that of deterministic one-turn push-down automata. However, when we combine both restrictions, we arrive at scenarios with a PSPACE-complete (and hence decidable) synchronizability problem. Likewise, we arrive at a decidable synchronizability problem for (partially) blind deterministic counter automata. We further show that in contrast to general deterministic push-down automata, it is decidable for deterministic visibly push-down automata whether there exists a synchronizing word with each of these stack constraints; more precisely, the problems are in EXP. Under the constraint (1), the problem is even in PTIME. For the subclasses of deterministic very visibly push-down automata, the problem is in PTIME for all three types of constraints. This talk is based on joint work with Henning Fernau and Tomoyuki Yamakami.
  • 14:13 – 14:20 Philip Offtermatt: Continuous One-Counter Automata

    Abstract:

    We study the reachability problem for continuous one-counter automata, COCA for short. In such automata, transitions are guarded by upper and lower bound tests against the counter value. Additionally, the counter updates associated with taking transitions can be (non-deterministically) scaled down by a nonzero factor between zero and one. Our three main results are as follows: (1) We prove that the reachability problem for COCA with global upper and lower bound tests is in NC2; (2) that, in general, the problem is decidable in polynomial time; and (3) that it is decidable in the polynomial hierarchy for COCA with parametric counter updates and bound tests. We thus give the first efficient conservative approximation for the reachability problem for SOCA and parametric SOCA. This is joint work with Michael Blondin, Tim Leys, Filip Mazowiecki and Guillermo A. Pérez.
  • 14:20 – 14:27 Sparsa Roychowdhury: Hole Bounded Reachability in Multi-Stack Pushdown Systems – with and without Time!

    Abstract:

    Boolean programs with multiple recursive threads can be captured as pushdown automata with multiple stacks. This model is Turing complete, and hence one is often interested in analyzing a restricted class that still captures useful behaviors. In this talk, we present a new class of underapproximations for multi-stack pushdown automata called hole-bounded behaviors. We compare the expressiveness of the hole-bounded underapproximation with the existing underapproximations. Then we solve the hole-bounded reachability problem of multi-stack pushdown automata, i.e., checking if there exists a hole-bounded behavior that is accepted by the MPDA, by developing an algorithm that uses efficient fix-point computations. We also propose an algorithm that produces a witness of reachability, i.e., when the reachability algorithm returns true, the witness algorithm produces the sequence of transitions as a witness of reachability. We further extend this work to the timed setting. Timed multi-stack pushdown automata are extensions of multi-stack pushdown automata with real-valued variables called clocks. The stacks of timed multi-stack pushdown automata are also aged, in the sense that they can track the amount of time an element spent inside them. We show that hole-bounded reachability of timed multi-stack pushdown automata with closed guards can be solved by extending our earlier algorithm. All these algorithms are implemented in a tool named BHIM (Bounded Hole reachability In Multi-stack pushdown systems). We illustrate its applicability by generating a set of relevant benchmarks and examining its performance. As an additional takeaway, BHIM solves the binary reachability problem in pushdown automata. Part of this work is published under the name “Revisiting Underapproximate Reachability for Multipushdown Systems” in TACAS’20 (https://doi.org/10.1007/978-3-030-45190-5_21). This work is done in collaboration with Prof. Akshay S (IIT Bombay, India), Prof. Paul Gastin (ENS Paris-Saclay, France), and Prof. Krishna S (IIT Bombay, India).
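For the decidable base case of the synchronizing-word abstracts in this session (complete deterministic finite automata), a synchronizing word can be found by the classical greedy pair-merging procedure. A sketch, with the transition-function encoding assumed for the example; it returns some synchronizing word, not necessarily a shortest one:

```python
from collections import deque

def synchronizing_word(delta, states, alphabet):
    """Greedy pair-merging: repeatedly find a word collapsing two of the
    currently reachable states, until a single state remains.

    delta: dict mapping (state, letter) to a state (complete DFA).
    Returns None if the automaton is not synchronizing.
    """
    def merge_word(p, q):
        # BFS over sets {p, q} -> ... -> singleton to find a merging word.
        start = frozenset((p, q))
        parent = {start: None}
        queue = deque([start])
        while queue:
            current = queue.popleft()
            if len(current) == 1:
                word = []  # reconstruct the word along the parent pointers
                while parent[current] is not None:
                    current, letter = parent[current]
                    word.append(letter)
                return "".join(reversed(word))
            for letter in alphabet:
                nxt = frozenset(delta[(s, letter)] for s in current)
                if nxt not in parent:
                    parent[nxt] = (current, letter)
                    queue.append(nxt)
        return None  # the pair {p, q} can never be merged

    reachable, word = set(states), ""
    while len(reachable) > 1:
        p, q, *_ = sorted(reachable)
        w = merge_word(p, q)
        if w is None:
            return None
        word += w
        for letter in w:
            reachable = {delta[(s, letter)] for s in reachable}
    return word

# Cerny's 4-state automaton: 'a' rotates the states, 'b' maps 0 to 1.
delta = {(i, "a"): (i + 1) % 4 for i in range(4)}
delta.update({(i, "b"): (1 if i == 0 else i) for i in range(4)})
word = synchronizing_word(delta, range(4), "ab")
```

On this automaton the shortest synchronizing word has the conjectured extremal length (4-1)^2 = 9; the greedy procedure gives a polynomial-length word but no shortness guarantee in general.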

Session 13: Poster session

Session 14A: Logic III
Chair: Igor Walukiewicz

  • 15:30 – 15:37 Yde Venema: Succinct graph representations of mu-calculus formulas

    Abstract:

    Given the status of the modal mu-calculus as a universal specification language, the computational complexity of its model checking and satisfiability problems is of central importance. To determine the complexity of a proposed algorithm for one of these problems, one needs a sensible measure of the size of the formula that is (part of) the input to the algorithm. Different size measures have been used, depending on how precisely such a formula is represented in the input: (1) the length of the formula, corresponding to a representation of the formula as a string or syntax tree; (2) subformula size, corresponding to a representation of the formula as the directed acyclic graph of its subformulas; and (3) closure size, corresponding to a similar representation of a formula via its (Fischer-Ladner) closure. The choice between these representations is non-trivial because the subformula size of a formula may be exponentially smaller than its length, and, as was shown by Bruse, Friedmann & Lange, its closure size may be exponentially smaller than its subformula size. Consequently, complexity results about the mu-calculus may be suboptimal when expressed in terms of subformula size, in the sense that a stronger version of the result holds when formulated in terms of closure size. In other words, it is desirable to design algorithms that operate on a representation of a formula that is based on its closure. In the literature, different frameworks have been used to represent mu-calculus formulas; examples include alternating tree automata and hierarchical equation systems. In both cases the mathematically fundamental structure underlying the representation is essentially a graph, whose nodes are labelled with logical connectives and priorities. Likewise, the parity games that feature in model checking algorithms generally are based on an arena which is some kind of product of a graph representing the formula and the model where it is evaluated. In all of these settings the graph representing the formula can be based on its syntax tree, its subformula dag or its closure graph. We make this graph structure explicit and call the resulting concept a “parity formula”. One can view these parity formulas as a natural generalisation of mu-calculus formulas themselves, of alternating tree automata and hierarchical equation systems, and of the formula component of model checking games. However, parity formulas have a very simple mathematical structure, which allows for a straightforward and unambiguous definition of their size and index (alternation depth). We provide two new results about the representation of mu-calculus formulas as parity formulas: 1) A common assumption in the literature on the mu-calculus is that one may assume, without loss of generality, that formulas are clean, or well-named, in the sense that bound variables are disjoint from free variables, and each bound variable determines a unique subformula. We show that this assumption may lead to an exponential blow-up in terms of closure-size. This means that, if one is interested in optimal complexity results, it cannot be assumed that the input formula is clean. 2) To the best of our knowledge, all representations of mu-calculus formulas known from the literature are suboptimal in one way or another: they are based on the subformula dag, presuppose this cleanness assumption, or use a priority function yielding an unnecessarily big index. Our main result is a construction that provides, for every mu-calculus formula, an equivalent parity formula that is based on the closure of the given formula and has an index that exactly corresponds to its alternation depth. The proof of this result is non-trivial. (joint work with Clemens Kupke and Johannes Marti)
  • 15:37 – 15:44 Guillermo Menéndez: Uniform interpolation from cyclic proofs: the case of modal mu-calculus

    Abstract:

    Uniform interpolation was first established for the modal mu-calculus by D’Agostino and Hollenberg via a combination of semantic and syntactic methods. A natural question is whether a purely syntactical proof of this result, in the style of Pitts’ seminal work on uniform interpolation for intuitionistic logic, can be produced. We answer the question in the positive by showing how to construct uniform interpolants from cyclic derivations in an annotated goal-oriented proof system introduced by Jungteerapanich and Stirling. This is joint work with Bahareh Afshari and Graham E. Leigh.
  • 15:44 – 15:51 Johannes Marti: Focus-style proof systems and Craig interpolation for the alternation-free mu-calculus

    Abstract:

    We introduce a cut-free sequent calculus for the alternation-free fragment of the modal mu-calculus. This system allows both for infinite and for finite, circular proofs and uses a simple focus mechanism to control the unravelling of fixpoints along infinite branches. We show that the proof system is sound and complete for the set of guarded valid formulas of the alternation-free \mu-calculus. We then use this proof system to show that the alternation-free fragment of the modal mu-calculus has the Craig interpolation property. Our proof of Craig interpolation then follows Maehara’s method. The interpolant of an implication between two formulas is computed by induction on the complexity of a finite circular proof of the implication. (joint work with Yde Venema)
  • 15:51 – 15:58 Clemens Kupke: On the Size of Disjunctive Formulas in the \mu-calculus

    Abstract:

    Joint work with Johannes Marti and Yde Venema. A key result in the theory of the modal \mu-calculus is the disjunctive normal form theorem by Janin & Walukiewicz, stating that every \mu-calculus formula is semantically equivalent to a so-called disjunctive formula. These disjunctive formulas have various nice properties, including a linear-time solution of their satisfiability problem (in terms of subformula-size of the formula). It is therefore an interesting question what the best normalisation procedure is for rewriting a formula \xi into an equivalent disjunctive formula \xi^d of minimal size. The best normalisation constructions that are known from the literature are automata-theoretic in nature and consist of a guarded transformation, constructing an equivalent guarded alternating automaton from a \mu-calculus formula, followed by a Simulation Theorem stating that any such alternating automaton can be transformed into an equivalent non-deterministic one. Given the analysis of Bruse, Friedmann & Lange, both of these transformations are exponential constructions, making the best normalisation procedure doubly exponential. Our key contribution presented here shows that the two parts of the normalisation procedure can be integrated, leading to a procedure that is single-exponential in the closure size of the formula. Related work: In spirit, our approach is closely related to the work by Lange & Friedmann on tableaux for unguarded \mu-calculus formulas, but their paper does not provide a derivation of a disjunctive normal form. Similarly, our automata-theoretic result might be obtainable from a more general result by Vardi on the two-way \mu-calculus – but there a key definition appears to make the implicit assumption that the given \mu-calculus formula is clean, whereas our own recent work on measuring the size of a \mu-calculus formula demonstrates that cleaning a formula can lead to an exponential blow-up in (closure) size.
  • 15:58 – 16:05 Marie Fortin: How undecidable are HyperLTL and HyperCTL*?

    Abstract:

    Temporal logics for the specification of information-flow properties are able to express relations between multiple executions of a system. The two most important such logics are HyperLTL and HyperCTL*, which generalise LTL and CTL* by trace quantification. It is known that this expressiveness comes at a price, i.e., satisfiability is undecidable for both logics. We settle the exact complexity of these problems, showing that both are in fact highly undecidable: we prove that HyperLTL satisfiability is complete for \Sigma_1^1 and HyperCTL* satisfiability is complete for \Sigma_1^2. These are significant increases over the previously known lower bounds and the first upper bounds. To prove \Sigma_1^2 membership for HyperCTL*, we prove that every satisfiable HyperCTL* formula has a model that is equinumerous to the continuum, the first upper bound of this kind. We prove this bound to be tight. Finally, we show that the membership problem for every level of the HyperLTL quantifier alternation hierarchy is \Pi_1^1-complete. This is joint work with Louwe B. Kuijer, Patrick Totzke and Martin Zimmermann.

Session 14B: Algebra
Chair: Markus Lohrey

  • 15:30 – 15:37 Nikhil Balaji: Cyclotomic Identity Testing and Applications

    Abstract:

    We consider the cyclotomic identity testing (CIT) problem: given a polynomial f(x_1,\ldots,x_k), decide whether f(\zeta_n^{e_1},\ldots,\zeta_n^{e_k}) is zero, where \zeta_n = e^{2\pi i/n} is a primitive complex n-th root of unity and e_1,\ldots,e_k are integers, represented in binary. When f is given by an algebraic circuit, we give a randomized polynomial-time algorithm for CIT assuming the generalised Riemann hypothesis (GRH), and show that the problem is in coNP unconditionally. When f is given by a circuit of polynomially bounded degree, we give a randomized NC algorithm. In case f is a linear form we show that the problem lies in NC. Towards understanding when CIT can be solved in deterministic polynomial time, we consider so-called diagonal depth-3 circuits, i.e., polynomials f=\sum_{i=1}^m g_i^{d_i}, where g_i is a linear form and d_i a positive integer given in unary. We observe that a polynomial-time algorithm for CIT on this class would yield a sub-exponential-time algorithm for polynomial identity testing. However, assuming GRH, we show that if the linear forms g_i are all identical then CIT can be solved in polynomial time. Finally, we use our results to give a new proof that equality of compressed strings, i.e., strings presented using context-free grammars, can be decided in randomized NC. This is joint work with Sylvain Perifel, Mahsa Shirmohammadi and James Worrell, to be presented at ISSAC 2021.
  • 15:37 – 15:44 Aniket Murhekar: Near-optimal complexity bounds for fragments of the Skolem Problem

    Abstract:

    Given a linear recurrence sequence (LRS), specified using the initial conditions and the recurrence relation, the Skolem problem asks if zero ever occurs in the infinite sequence generated by the LRS. Despite active research over the last few decades, its decidability is known only for a few restricted subclasses, obtained by either restricting the order of the LRS (up to 4) or by restricting the structure of the LRS (e.g., roots of its characteristic polynomial). In this paper, we identify a subclass of LRS of arbitrary order for which the Skolem problem is easy, namely LRS all of whose characteristic roots are (possibly complex) roots of real algebraic numbers, i.e., roots satisfying x^d = r for r real algebraic. We show that for this subclass, the Skolem problem can be solved in NP^RP. As a byproduct, we implicitly obtain effective bounds on the zero set of the LRS for this subclass. While prior works in this area often exploit deep results from algebraic and transcendental number theory to get such effective results, our techniques are primarily algorithmic and use linear algebra and Galois theory. We also complement our upper bounds with an NP lower bound for the Skolem problem via a new direct reduction from 3-CNF-SAT, matching the best known lower bounds. This is joint work with S. Akshay, Nikhil Balaji, Rohith Varma, and Nikhil Vyas, and was accepted for publication at STACS 2020.
  • 15:44 – 15:51 Klara Nosan: On the Computation of the Algebraic Closure of Finitely Generated Groups of Matrices

    Abstract:

    We investigate the complexity of computing the Zariski closure of a finitely generated group of matrices. The Zariski closure was previously shown to be computable by Derksen, Jeandel and Koiran, but the termination argument for their algorithm appears not to yield any complexity bound. Here we follow a different approach and obtain a bound on the degree of the polynomials that define the closure. Our bound shows that the closure can be computed in elementary time. We describe several applications of this result, e.g., concerning quantum automata and quantum universal gates. We also obtain an upper bound on the length of a strictly increasing chain of linear algebraic groups, all of which are generated over a fixed number field. This work is in collaboration with Amaury Pouly, Mahsa Shirmohammadi and James Worrell, and is submitted and currently under review.
  • 15:51 – 15:58 Arka Ghosh: Solvability of orbit-finite systems of linear equations

    Abstract:

    We discuss solvability of systems of linear equations over orbit-finitely generated vector spaces and their duals. More specifically, given equivariant orbit-finite sets B and C, an equivariant matrix M with rows B and columns C (i.e., M is an equivariant function from B×C to the rationals), and a vector v indexed by B, the problem asks whether there is a solution of the system of equations Mx = v. We consider two variants of this problem. In the first variant we ask for a solution which is non-zero only on finitely many coordinates, while M is unrestricted. In the second variant we assume the rows of the matrix M to be non-zero only on finitely many coordinates, and we ask for an unrestricted (though finitely supported) solution. We show decidability of both variants by showing that the first variant is reducible to solvability of a finite system of linear equations, and the second variant is reducible to the first variant. This is work in progress, joint with Piotr Hofman and Sławomir Lasota.
  • 15:58 – 16:05 Matthias Naaf: Computing Least and Greatest Fixed Points in Absorptive Semirings

    Abstract:

    This talk presents results on the computation of both least and greatest solutions of polynomial equation systems over absorptive semirings (with certain completeness and continuity assumptions) such as the tropical semiring, motivated by recent work on semiring provenance analysis of fixed-point logics. Our main result is a closed-form solution that needs only a polynomial number of semiring operations and an infinitary power operation. We prove this by considering (possibly infinite) derivation trees and showing that we only need trees of a certain shape: a reachability prefix with (infinite) deterministic subtrees. This talk is based on a paper submitted to RAMiCS 2021, a preprint is available at https://arxiv.org/abs/2106.00399.
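
    To make the semiring setting of the last abstract concrete: over the tropical semiring (min, +), Kleene-style iteration, starting every unknown at +∞ (the semiring zero) and re-applying the equations until nothing changes, computes a solution of such a fixed-point system. The Python sketch below is our own toy illustration, not the talk's construction; the encoding and the name `tropical_fixpoint` are assumptions, and it assumes non-negative weights so the iteration terminates.

    ```python
    import math

    def tropical_fixpoint(n, edges, source):
        """Iterate x[v] = min(b[v], min over edges (u, v, w) of x[u] + w)
        over the tropical semiring (min, +), starting from all-infinity.
        With b pinning the source to 0, the stable vector holds the
        shortest-path distances.  Assumes non-negative weights."""
        b = [0 if v == source else math.inf for v in range(n)]
        x = [math.inf] * n
        while True:
            nxt = [min([b[v]] + [x[u] + w for (u, vv, w) in edges if vv == v])
                   for v in range(n)]
            if nxt == x:
                return x
            x = nxt

    # Edges 0 -> 1 (weight 5), 1 -> 2 (1), 0 -> 2 (10): distances [0, 5, 6].
    print(tropical_fixpoint(3, [(0, 1, 5), (1, 2, 1), (0, 2, 10)], 0))
    ```

    The talk's point is precisely that such naive iteration can be replaced by a closed-form solution using only polynomially many semiring operations plus an infinitary power operation.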

Session 15: Poster session

Session 16: Invited talk 4
Chair: Michael Benedikt

  • 17:00 – 18:00 Balder ten Cate: Unique Characterizations, Concept Learning, and the Homomorphism Lattice

    Abstract:

    Various problems arising in the context of example-driven approaches to query discovery turn out to be intimately related to basic structural properties of the homomorphism lattice of finite structures, such as density or the existence of duals. I will discuss some of these connections and highlight relevant results from recent joint work with Victor Dalmau.

Friday 17/09/2021

Session 17: Invited talk 5
Chair: Christel Baier

  • 09:30 – 10:30 Joël Ouaknine: Holonomic Techniques, Periods, and Decision Problems

    Abstract:

    Holonomic techniques have deep roots going back to Wallis, Euler, and Gauss, and have evolved in modern times as an important subfield of computer algebra, thanks in large part to the work of Zeilberger and others over the past three decades. In this talk, I give an overview of the area, and in particular present a select survey of known and original results on decision problems for holonomic sequences and functions. I also discuss some surprising connections to the theory of periods and exponential periods, which are classical objects of study in algebraic geometry and number theory; in particular, I relate the decidability of certain decision problems for holonomic sequences to deep conjectures about periods and exponential periods, notably those due to Kontsevich and Zagier.

Session 18A: Concurrency
Chair: Piotr Hofman

  • 10:45 – 10:52 A. R. Balasubramanian: The Cut-off Problem for Petri Nets

    Abstract:

    Given a Petri net N and two markings M, M', the cut-off problem is to decide whether for all sufficiently large n, the marking nM (consisting of n copies of the marking M) can reach the marking nM'. Motivated by questions from rendez-vous protocols, this question was introduced by Horn and Sangnier, who proved that this problem is in EXPSPACE. We show that the cut-off problem for Petri nets is in P. To obtain the polynomial-time upper bound, we consider Petri nets with continuous semantics and prove two lemmas that connect the continuous semantics for Petri nets to their standard semantics. This is joint work with Javier Esparza and Mikhail Raskin and has been published at FoSSaCS 2021.
  • 10:52 – 10:59 Alex Dixon: Verifying higher-order concurrency with data automata

    Abstract:

    Using a combination of automata-theoretic and game-semantic techniques, we give a new method for analysing higher-order concurrent programs. Our language of choice is Finitary Idealised Concurrent Algol (FICA). Our first contribution is an automata model over a tree-structured infinite data alphabet, called Split Automata, whose distinctive feature is the separation of control and memory. We show that every FICA term can be translated into such an automaton. Thanks to the structure of split automata, we are able to observe subtle aspects of the underlying game semantics. This enables us to identify a fragment of FICA with iteration and limited synchronisation (but without recursion), for which, in contrast to FICA itself, a variety of verification problems turn out to be decidable. In the presentation, we shall discuss FICA, the construction of Split Automata, some of the decidability results we have shown on our FICA fragment, and the consequences for the analysis of higher-order concurrent programs. The presentation will be of broad interest to those in the fields of game semantics, automata theory, and programming language theory. This is joint work with Ranko Lazic, Andrzej S. Murawski and Igor Walukiewicz.
  • 10:59 – 11:06 Alexandre Vigny: Parameterized Distributed Complexity Theory: A logical approach

    Abstract:

    Parameterized complexity theory offers a framework for a refined analysis of hard algorithmic problems. In this work we follow the approach of parameterized complexity to provide a framework of parameterized distributed complexity.
  • 11:06 – 11:13 Chana Weil-Kennedy: Reconfigurable Broadcast Networks and Asynchronous Shared-Memory Systems are Equivalent

    Abstract:

    We show the equivalence of two models, reconfigurable broadcast networks (RBN) and asynchronous shared-memory systems (ASMS), that were introduced and studied independently. Both RBN and ASMS model distributed systems in which a collection of anonymous, finite-state processes run a common protocol. In RBN, the processes communicate by broadcast: a process can broadcast a message which is received by a subset of the processes. In ASMS, the processes communicate by writing to and reading from a shared register. We show that RBN can simulate ASMS: Given an RBN instance P, we can construct an ASMS instance P’ and a bijection f from configurations of P to certain “good” configurations of P’ satisfying the following property. Given two sets of configurations C and D defined by upper and lower bounds on the number of processes in each state, C can reach D in the RBN P if and only if f(C) can reach f(D) in the ASMS P’. Additionally, ASMS can simulate RBN. Using this simulation equivalence, we transfer results of RBN to ASMS and vice versa. Finally, we show that RBN and ASMS can simulate a third similar distributed model called immediate observation Petri nets, allowing some transfer of results. This third model cannot, however, simulate the other two. This is joint work with A. R. Balasubramanian. It will be submitted to GandALF 2021.
  • 11:13 – 11:20 Felix Stutz: Verification of Asynchronous Communication – An Automata-theoretic Perspective on Multiparty Session Types

    Abstract:

    Multiparty session types (MSTs) provide an efficient methodology for specifying and verifying message-passing software systems. In the theory of MSTs, a global type specifies the interaction among the roles at the global level. A local specification for each role is generated by projecting from the global type onto the message exchanges it participates in. Whenever a global type can be projected onto each role, the composition of the projections is deadlock-free and has exactly the behaviours specified by the global type. The key to the usability of MSTs is the projection operation: a more expressive projection allows more systems to be type-checked but requires a more difficult soundness argument. We generalise the standard projection operation in MSTs. This allows us to model and type-check many design patterns in distributed systems, such as load balancing, that are rejected by the standard projection. The key to the new projection is an analysis that tracks causality between messages. In contrast to standard soundness proofs in the domain of MSTs, our proofs rely on insights from automata and language theory to establish correctness. This presentation is based on a paper co-authored with Rupak Majumdar, Madhavan Mukund and Damien Zufferey which appeared at CONCUR this year (https://drops.dagstuhl.de/opus/volltexte/2021/14412/pdf/LIPIcs-CONCUR-2021-35.pdf).
  • 11:20 – 11:27 Ramanathan Thinniyam Srinivasan: General Decidability Results for Asynchronous Shared-Memory Programs: Higher-Order and Beyond

    Abstract:

    The model of asynchronous programming arises in many contexts, from low-level systems software to high-level web programming. We take a language-theoretic perspective and show general decidability and undecidability results for asynchronous programs that capture all known results as well as show decidability of new and important classes. As a main consequence, we show decidability of safety, termination and boundedness verification for higher-order asynchronous programs—such as OCaml programs using Lwt—and undecidability of liveness verification already for order-2 asynchronous programs. We show that under mild assumptions, surprisingly, safety and termination verification of asynchronous programs with handlers from a language class are decidable iff emptiness is decidable for the underlying language class. Moreover, we show that configuration reachability and liveness (fair termination) verification are equivalent, and decidability of these problems implies decidability of the well-known “equal-letters” problem on languages. Our results close the decidability frontier for asynchronous programs.
  • 11:27 – 11:34 Wojciech Czerwiński: Reachability in Vector Addition Systems is Ackermann-complete

    Abstract:

    Vector Addition Systems and the equivalent Petri nets are well-established models of concurrency. The central algorithmic problem for Vector Addition Systems, with a long research history, is the reachability problem, asking whether there exists a run from one given configuration to another. We settle its complexity to be Ackermann-complete, thus closing a problem open for 45 years. In particular, we prove that the problem is F_k-hard for Vector Addition Systems with States in dimension 6k, where F_k is the k-th complexity class from the hierarchy of fast-growing complexity classes.
  • 11:34 – 11:41 Jérôme Leroux: The Reachability Problem for Petri Nets is Not Primitive Recursive

    Abstract:

    We present a way to lift the Tower complexity lower bound of the reachability problem for Petri nets to match the Ackermannian upper bound, closing a long-standing open problem. We also prove that the reachability problem in dimension 17 is not elementary.
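
    For readers who want to experiment with the models in this session: the Python sketch below (our own illustration; the encoding and the name `reachable` are assumptions, not anything from the talks) does naive breadth-first exploration of a vector addition system, truncated at a per-counter cap. It is only a bounded semi-decision; as the last two abstracts explain, the full reachability problem is Ackermann-complete.

    ```python
    from collections import deque

    def reachable(transitions, source, target, cap):
        """Bounded BFS over VAS configurations: a transition (a vector of
        integer deltas) is enabled if firing it keeps every counter between
        0 and `cap`.  Returns True iff `target` is found within the cap;
        a False answer proves nothing about unbounded reachability."""
        seen = {source}
        queue = deque([source])
        while queue:
            conf = queue.popleft()
            if conf == target:
                return True
            for t in transitions:
                nxt = tuple(c + d for c, d in zip(conf, t))
                if all(0 <= c <= cap for c in nxt) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # Move three tokens from the first counter to the second, one at a time.
    print(reachable([(-1, 1)], (3, 0), (0, 3), cap=10))  # True
    ```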

Session 18B: Automata & languages III
Chair: James Worrell

  • 10:45 – 10:52 Antoine Amarilli: Dynamic Membership for Regular Languages

    Abstract:

    We study the dynamic membership problem for regular languages: fix a language L, read a word w, build in time O(|w|) a data structure indicating if w is in L, and maintain this structure efficiently under letter substitutions on w. We consider this problem on the unit-cost RAM model with logarithmic word length, where the problem always has a solution in O(log |w| / log log |w|) per operation. We show that the problem is in O(log log |w|) for languages in an algebraically defined, decidable class QSG, and that it is in O(1) for another such class QLZG. We show that languages not in QSG admit a reduction from the prefix problem for a cyclic group, so that they require Ω(log |w| / log log |w|) operations in the worst case; and that QSG languages not in QLZG admit a reduction from the prefix problem for the multiplicative monoid U_1 = {0, 1}, which we conjecture cannot be maintained in O(1). This yields a conditional trichotomy. We also investigate intermediate cases between O(1) and O(log log |w|). Our results are shown via the dynamic word problem for monoids and semigroups, for which we also give a classification. We thus close gaps that were left open by the 1997 JACM paper of Skovbjerg Frandsen, Miltersen, and Skyum. Our results on monoids are summarized in Figure 1. This talk will present our results on the dynamic word problem for monoids and semigroups and on the dynamic membership problem for regular languages. It follows our work with Louis Jachiet and Charles Paperman which will appear at ICALP’21: http://arxiv.org/abs/2102.07728.
  • 10:52 – 10:59 Arthur Jaquard: A Complexity Approach to Tree Algebras: the Polynomial Case

    Abstract:

    Tree algebras in many of their forms, such as clones, hyperclones, operads, etc., as well as other kinds of algebras, are infinitely sorted: the carrier is a multi-sorted set indexed by a parameter that can be interpreted as the number of variables or hole types. Finite such algebras – meaning those in which all sorts are finite – can be classified depending on the asymptotic size of the carrier sets as a function of the parameter, which we call the complexity of the algebra. This naturally defines the notions of algebras of bounded, linear, polynomial, exponential or doubly exponential complexity. At Highlights 2020, I presented a characterisation of the languages recognized by algebras of bounded complexity (to be presented at ICALP 2021). This time, I would like to present similar results for algebras of polynomial complexity. This is joint work with Thomas Colcombet.
  • 10:59 – 11:06 Eugenio Colla: Ramsey theory and aperiodic monoids

    Abstract:

    One of the most important theorems in the algebraic theory of automata is due to Schützenberger. It states that a rational language is star-free if and only if its syntactic monoid is finite and aperiodic. In Ramsey theory, two very important classes of monoids are those of Ramsey monoids and \YY(M)-controllable monoids. If a monoid M belongs to one of these two classes, then a coloring statement holds for M^*, the free semigroup of words on M. While these notions were formalized recently, they generalize celebrated theorems by Hindman, Carlson, Gowers, Furstenberg, and Katznelson. These notions have found applications in set theory, Ramsey spaces, descriptive set theory, and Banach space theory. In joint work with Claudio Agostini, we prove that Ramsey monoids and \YY(M)-controllable monoids are aperiodic and that large classes of finite aperiodic monoids are Ramsey or \YY(M)-controllable. For example, Ramsey monoids are exactly those finite, aperiodic monoids such that \{aM: a\in M\} is linearly ordered by inclusion. These results seem to show a new connection between Ramsey theory and automata. The aim of this talk is to present our main results to stimulate a dialogue between these two areas.
  • 11:06 – 11:13 Ismaël Jecker: Decomposing Permutation Automata

    Abstract:

    A deterministic finite automaton (DFA) is composite if its language can be decomposed into an intersection of languages of smaller DFAs. Otherwise, it is prime. This notion of primality was introduced by Kupferman and Mosheiff in 2013, and while they proved that we can decide whether a DFA is composite, the precise complexity of this problem is still open, with a doubly-exponential gap between the upper and lower bounds. In this work, we focus on permutation DFAs, i.e., those for which the transition monoid is a group. We provide an NP algorithm to decide whether a permutation DFA is composite, and show that the difficulty of this problem comes from the number of non-accepting states of the instance: we give a fixed-parameter tractable algorithm with the number of rejecting states as the parameter. Moreover, we investigate the class of commutative permutation DFAs. Their structural properties allow us to decide compositionality in NLOGSPACE, and even in LOGSPACE if the alphabet size is fixed. Despite this low complexity, we show that complex behaviors still arise in this class: we provide a family of composite DFAs each requiring polynomially many factors with respect to its size. We also consider the variant of the problem that asks whether a DFA is k-factor composite, that is, decomposable into k smaller DFAs, for some given integer k. We show that, for commutative permutation DFAs, restricting the number of factors makes the decision computationally harder, and yields a problem with tight bounds: it is NP-complete. Finally, we show that in general, this problem is in PSPACE, and it is in LOGSPACE for DFAs with a singleton alphabet.
  • 11:13 – 11:20 Louis-Marie Dando: A Kleene-Schützenberger Theorem for Pre-Rational Monoids.

    Abstract:

    The Kleene-Schützenberger theorem [3] establishes a fundamental link between automata and expressions. Numerous generalisations of this result exist in the literature. In particular, lifting this result to a weighted setting has been widely studied, and positive results are known for graded monoids [2]. However, graded monoids do not allow for idempotent elements, which causes difficulties when trying to encompass the setting of two-way automata, which was the starting point of this investigation. The result was known in the Boolean case for free inverse monoids [1], which capture both two-way and tree-walking automata, but (to the best of our knowledge) without extending it to the weighted setting. In the present work, we combine both research directions and consider weighted extensions over a class of monoids that we call pre-rational, which captures free inverse monoids. But as is usual when considering weighted two-way automata, we need to handle infinite sums, and we therefore restrict weights to rationally additive semirings. This class encompasses the positive rationals (with infinity), the tropical semiring, and rational languages over a finite alphabet, and ensures the correctness of our various sum manipulations. The main result is the extension of the Kleene-Schützenberger theorem to automata and expressions over pre-rational monoids weighted over rationally additive semirings. This allows us to have regular expressions describing weighted two-way or tree-walking automata. In particular, we get a set of expressions describing the relations realised by nondeterministic two-way transducers, which was one of the motivations for this work. This work was done in collaboration with Nicolas Baudru, Nathan Lhote, Benjamin Monmege, Pierre-Alain Reynier and Jean-Marc Talbot, within the ANR DeLTA project.

    References: [1] Anne Dicky and David Janin. Two-way automata and regular languages of overlapping tiles. Fundamenta Informaticae, 142:1–33, 2015. [2] Jacques Sakarovitch. Elements of Automata Theory. Cambridge University Press, 2009. [3] Marcel-Paul Schützenberger. On the definition of a family of automata. Information and Control, 4:245–270, 1961.
  • 11:20 – 11:27 Antonio Abu Nassar: Semantic Symmetry in Transducers

    Abstract:

    Letter-to-letter transducers are a standard formalism for modeling reactive systems. Often, two transducers that model similar systems differ locally from one another, by behaving similarly, up to permutations of the input and output letters within “rounds”. In this work, we introduce and study notions of simulation by rounds and equivalence by rounds of transducers. In our setting, words are partitioned to consecutive subwords of length k, called rounds. Then, a transducer T_1 is k-round simulated by transducer T_2 if, intuitively, for every input word x, we can permute the letters within each round in x, such that the output of T_2 on the permuted word is itself a permutation of the output of T_1 on x. Finally, two transducers are k-round equivalent if they simulate each other. We solve two main decision problems, namely whether T_2 k-round simulates T_1 (1) when k is given as input, and (2) for an existentially quantified k. We demonstrate the usefulness of the definitions by applying them to process symmetry: a setting in which a permutation in the identities of processes in a multi-process system naturally gives rise to two transducers, whose k-round equivalence corresponds to stability against such permutations.
  • 11:27 – 11:34 Sougata Bose: One-way Resynchronizability of Word Transducers

    Abstract:

    The one-way definability problem for word transducers asks whether a given two-way transducer is equivalent to some one-way transducer. This problem is motivated by the fact that a one-way transducer can process the input one letter at a time, whereas a two-way transducer needs to store the entire input. This problem has been studied in the classical semantics and is undecidable in general, but becomes decidable when restricted to functional transducers. In this talk, we consider a variant of the one-way definability problem in the origin semantics. The origin semantics for transducers was proposed in 2014 by Bojanczyk and has led to various decidability results for problems that are undecidable in the classical semantics. We show that it is decidable whether a two-way word transducer is “close” to a one-way transducer in the origin semantics, up to distortions of origins defined by bounded, regular resynchronizers. We call this the one-way resynchronizability problem and we characterize the class of “one-way resynchronizable” transducers using a property of the runs of the transducer, called “inversions”, and a structural property of the origin graphs produced by the transducer, called “cross-width”. Furthermore, a two-way transducer can be checked for absence of inversions, which also shows that the problem is decidable. This is joint work with Krishna S., Anca Muscholl, and Gabriele Puppis, presented at FoSSaCS 2021.
  • 11:34 – 11:41 Emmanuel Arrighi: On the Complexity of Intersection Non-emptiness for Star-Free Language Classes

    Abstract:

    This is joint work with Henning Fernau, Stefan Hoffmann, Markus Holzer, Ismaël Jecker, Mateus de Oliveira Oliveira and Petra Wolf. In the Intersection Non-emptiness problem, we are given a list of finite automata A1, A2, …, Am over a common alphabet Σ as input, and the goal is to determine whether some string w ∈ Σ∗ lies in the intersection of the languages accepted by the automata in the list. We analyze the complexity of the Intersection Non-emptiness problem under the promise that all input automata accept a language in some level of the dot-depth hierarchy, or some level of the Straubing-Thérien hierarchy. Automata accepting languages from the lowest levels of these hierarchies arise naturally in the context of model checking. We identify a dichotomy in the dot-depth hierarchy by showing that the problem is already NP-complete when all input automata accept languages of the levels B0 or B1/2, and already PSPACE-hard when all automata accept a language from the level B1. Conversely, we identify a tetrachotomy in the Straubing-Thérien hierarchy. More precisely, we show that the problem is in AC0 when restricted to level L0; complete for L or NL, depending on the input representation, when restricted to languages in the level L1/2; NP-complete when the input is given as DFAs accepting a language in L1 or L3/2; and finally, PSPACE-complete when the input automata accept languages in level L2 or higher. Moreover, we show that the proof technique used to show containment in NP for DFAs accepting languages in L1 or L3/2 does not generalize to the context of NFAs. To prove this, we identify a family of languages that provide an exponential separation between the state complexity of general NFAs and that of partially ordered NFAs. To the best of our knowledge, this is the first superpolynomial separation between these two models of computation.
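
    The product construction at the heart of the Intersection Non-emptiness problem fits in a few lines. The Python sketch below (our own illustration; the DFA encoding and the name `intersection_nonempty` are assumptions, not from the talk) explores the product automaton lazily by BFS over tuples of states:

    ```python
    from collections import deque

    def intersection_nonempty(dfas):
        """Each DFA is (delta, start, accepting) with delta[state][letter].
        BFS over tuples of states (the product automaton, built lazily);
        the intersection is non-empty iff a tuple of accepting states
        is reachable from the tuple of start states."""
        start = tuple(d[1] for d in dfas)
        alphabet = dfas[0][0][dfas[0][1]].keys()
        seen = {start}
        queue = deque([start])
        while queue:
            states = queue.popleft()
            if all(s in acc for s, (_, _, acc) in zip(states, dfas)):
                return True
            for a in alphabet:
                nxt = tuple(delta[s][a] for s, (delta, _, _) in zip(states, dfas))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # L1 = even number of a's, L2 = at least one a: intersection contains "aa".
    even = ({0: {"a": 1}, 1: {"a": 0}}, 0, {0})
    some = ({0: {"a": 1}, 1: {"a": 1}}, 0, {1})
    print(intersection_nonempty([even, some]))  # True
    ```

    The visited set can grow to the product of the automata's state counts, which is exactly the blow-up behind the PSPACE-hardness results discussed in the abstract.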

Session 19: Poster session

Session 20A: Finite models
Chair: Szymon Toruńczyk

  • 14:00 – 14:07 Kuldeep S. Meel: Model Counting meets F_0 Estimation

    Abstract:

    Constraint satisfaction problems (CSPs) and data stream models are two powerful abstractions to capture a wide variety of problems arising in different domains of computer science. Developments in the two communities have mostly occurred independently and with little interaction between them. In this work, we seek to investigate whether bridging the seeming communication gap between the two communities may pave the way to richer fundamental insights. To this end, we focus on two foundational problems: model counting for CSPs and computation of zeroth frequency moments (F_0) for data streams. Our investigations lead us to observe a striking similarity in the core techniques employed in the algorithmic frameworks that have evolved separately for model counting and F_0 computation. We design a recipe for translating algorithms developed for F_0 estimation into algorithms for model counting, resulting in new algorithms for model counting. We then observe that algorithms in the context of distributed streaming can be transformed into distributed algorithms for model counting. We next turn our attention to viewing streaming from the lens of counting and show that framing F_0 estimation as a special case of \#DNF counting allows us to obtain a general recipe for a rich class of streaming problems, which had been subjected to case-specific analysis in prior works. In particular, our view yields a state-of-the-art algorithm for multidimensional range-efficient F_0 estimation with a simpler analysis. (Joint work with A. Pavan, N. V. Vinodchandran, and Arnab Bhattacharya. The corresponding paper will appear at PODS 2021.)
  • 14:07 – 14:14 Dominik D. Freydenberger: The theory of concatenation over finite models

    Abstract:

    We propose FC, a new logic on words that combines finite model theory with the theory of concatenation – a first-order logic that is based on word equations. Like the theory of concatenation, FC is built around word equations; in contrast to it, its semantics are defined to only allow finite models, by limiting the universe to a word and all its factors. As a consequence of this, FC has many of the desirable properties of FO on finite models, while being far more expressive than FO[<]. Most noteworthy among these desirable properties are sufficient criteria for efficient model checking, and capturing various complexity classes by adding operators for transitive closures or fixed points. Not only does FC allow us to obtain new insights and techniques for expressive power and efficient evaluation of document spanners, but it also provides a general framework for logic on words that also has potential applications in other areas. This talk is based on a joint work with Liat Peterfreund, to be presented at ICALP 2021.
  • 14:14 – 14:21 Emanuel Kieronski: Finite Model Theory of the Triguarded Fragment and Related Logics

    Abstract:

    The Triguarded Fragment (TGF) is among the most expressive decidable fragments of first-order logic, subsuming both its two-variable and guarded fragments without equality. We show that the TGF has the finite model property (providing a tight doubly exponential bound on the model size), and hence finite satisfiability coincides with satisfiability, which is known to be N2ExpTime-complete. Using similar constructions, we also establish 2ExpTime-completeness for finite satisfiability of the constant-free (tri)guarded fragment with transitive guards.
  • 14:21 – 14:28 Moritz Lichter: Limitations of the Invertible-Map Equivalence

    Abstract:

    The invertible-map equivalence is similar to the well-known Weisfeiler-Leman equivalence but is strictly more expressive. Intuitively, equivalences on tuples are refined iteratively based on linear operators on vector spaces over finite fields. Combining two publications, one by Anuj Dawar, Erich Grädel, and Wied Pakusa (ICALP 2019) and one by Moritz Lichter (LICS 2021), limits the expressive power of the invertible-map equivalence and of an equivalent logical characterization: First, for no fixed k does the k-dimensional invertible-map equivalence coincide with isomorphism on all finite graphs. Second, no extension of fixed-point logic by linear-algebraic operators over fields can capture polynomial time.
  • 14:28 – 14:35 Sandra Kiefer: Logarithmic Weisfeiler-Leman Identifies All Planar Graphs

    Abstract:

    The Weisfeiler-Leman (WL) algorithm is a well-known combinatorial procedure for detecting symmetries in graphs that is widely used in graph-isomorphism tests. It proceeds by iteratively computing vertex colours. The number of iterations needed to obtain the final output is crucial for the parallelisability of the algorithm. In my presentation, I would like to present an overview of our recent proof that there is a constant k such that every planar graph can be identified (that is, distinguished from every non-isomorphic graph) by the k-dimensional WL algorithm within a logarithmic number of iterations. This generalises a result due to Verbitsky, who proved the same for 3-connected planar graphs. The number of iterations needed by the k-dimensional WL algorithm to identify a graph corresponds to the quantifier depth of a sentence that defines the graph in the (k+1)-variable fragment C^(k+1) of first-order logic with counting quantifiers. Thus, our result implies that every planar graph is definable by a C^(k+1)-sentence of logarithmic quantifier depth. The discussed result was obtained in collaboration with Martin Grohe. The corresponding paper has been accepted for publication at ICALP 2021 and the full version will appear on arXiv in the next few days.
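The iterative colour refinement at the heart of WL can be sketched for the 1-dimensional case. The talk concerns the k-dimensional algorithm and its iteration count; this toy version (names my own) only illustrates the refinement loop:

```python
def wl_refine(adj):
    """1-dimensional WL colour refinement.
    adj maps each vertex to its list of neighbours."""
    colours = {v: 0 for v in adj}  # start from the uniform colouring
    while True:
        # A vertex's new colour combines its old colour with the
        # multiset of its neighbours' old colours.
        sigs = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                for v in adj}
        # Canonically rename signatures to small integers.
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new = {v: palette[sigs[v]] for v in adj}
        if len(set(new.values())) == len(set(colours.values())):
            return new  # partition is stable: no further refinement
        colours = new
```

On a path of three vertices the endpoints end up in one colour class and the midpoint in another, while on a cycle all vertices stay in a single class.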
  • 14:35 – 14:42 Felipe Ferreira Santos: Separating LREC from LFP

    Abstract:

    LREC= is an extension of first-order logic with a logarithmic recursion operator. It was introduced by Grohe et al. and shown to capture the complexity class L over trees and interval graphs. It does not capture L in general, as it is contained in FPC (fixed-point logic with counting). We show that this containment is strict. In particular, we show that the path systems problem, a classic P-complete problem that is definable in LFP (fixed-point logic), is not definable in LREC=. This shows that the logarithmic recursion mechanism is provably weaker than general least fixed points. The proof is based on a novel Spoiler-Duplicator game tailored for this logic. This is ongoing joint work with Anuj Dawar.
  • 14:42 – 14:49 Benedikt Pago: Choiceless Polynomial Time, Symmetric Circuits, and Cai-Fürer-Immerman graphs

    Abstract:

    Choiceless Polynomial Time (CPT) is currently the only well-established candidate logic for capturing PTIME (the other candidate, Rank logic, was very recently separated from PTIME by Moritz Lichter). So far, no decision problem has been shown to be undefinable in CPT. Hence, our goal is to understand the limits of CPT’s expressive power and to develop tools which could potentially be used to separate it from PTIME. A typical benchmark for logics in PTIME is the isomorphism problem of Cai-Fürer-Immerman (CFI) graphs. This is decidable in polynomial time, but it is unknown whether CPT can solve every instance of it. We establish a necessary condition for the CPT-definability of the CFI-problem on a given graph class: the existence of certain XOR-circuits which are symmetric with respect to the automorphisms of the graphs in question. As a consequence, one could in principle separate CPT from PTIME by showing that any family of circuits with the required properties cannot be sufficiently symmetric. We hope that existing techniques for the analysis of circuit symmetries will be useful to make progress towards this goal. Another result of our (unpublished) work is a CPT-algorithm that solves the CFI-problem, provided that an XOR-circuit with specific properties is given as part of the input.
  • 14:49 – 14:56 Eugenia Ternovska: Towards Capturing PTIME with no Counting Construct (but with a Version of Hilbert’s Choice Operator Epsilon)

    Abstract:

    The central open question in Descriptive Complexity is whether there is a logic that characterizes deterministic polynomial time (PTIME) on relational structures. In this talk, I introduce my work on this question. I define a logic that is obtained from first-order logic with fixed points, FO(FP), by a series of transformations that include restricting logical connectives and adding a dynamic version of Hilbert’s Choice operator Epsilon. The formalism can be viewed either as an algebra of binary relations or a linear-time modal dynamic logic, where algebraic expressions describing “proofs” or “programs” appear inside the modalities. Many typical polynomial time properties such as cardinality, reachability and those requiring “mixed” propagations (that include linear equations modulo two) are axiomatizable in the logic, and an arbitrary PTIME Turing machine can be encoded. For each fixed Choice function, the data complexity of model checking is in PTIME. However, there can be exponentially many such functions. “Naive evaluations” refer to a version of this model checking procedure where the Choice function variable Epsilon is simply treated as a constant. A crucial question is under what syntactic conditions on the algebraic terms such a naive evaluation works, that is, provides a certain answer to the original model checking problem. The two views of the formalism support application of both automata-theoretic and algebraic techniques to the study of this question.

Session 20B: Learning
Chair: Véronique Bruyere

  • 14:00 – 14:07 Adrien Boiret: Active Learning of Sequential Transducers with Side Information about the Domain

    Abstract:

    Active learning is a setting in which a student queries a teacher, through membership and equivalence queries, in order to learn a language. Performance of these algorithms is often measured in the number of queries required to learn a target, with an emphasis on costly equivalence queries. In graybox learning, the learning process is accelerated by foreknowledge of some information on the target. Here, we consider graybox active learning of subsequential string transducers, where a regular overapproximation of the domain is known by the student. We show that there exists an algorithm to learn subsequential string transducers with a better guarantee on the required number of equivalence queries than classical active learning. Joint work with Raphaël Berthon, Guillermo A. Pérez, and Jean-François Raskin.
  • 14:07 – 14:14 Daniel Stan: Learning Union of Integer Hypercubes with Queries

    Abstract:

    We study the problem of actively learning a finite union of integer (axis-aligned) hypercubes over the d-dimensional integer lattice, i.e., whose edges are parallel to the coordinate axes. This is a natural generalization of the classic problem in computational learning theory of learning rectangles. We provide a learning algorithm with access to a minimally adequate teacher (i.e., membership and equivalence oracles) that solves this problem in polynomial time, for any fixed dimension d. Over a non-fixed dimension, the problem subsumes the problem of learning DNF boolean formulas, a central open problem in the field. We also provide extensions to handle infinite hypercubes in the union, and show how subset queries can improve the performance of the learning algorithm in practice. Our problem has a natural application to the problem of monadic decomposition of quantifier-free integer linear arithmetic formulas, which has been actively studied in recent years. In particular, a finite union of integer hypercubes corresponds to a finite disjunction of monadic predicates over integer linear arithmetic (without modulo constraints). Our experiments suggest that our learning algorithms substantially outperform the existing algorithms. Joint work with Oliver Markgraf and Anthony W. Lin, accepted for publication at CAV 2021.
  • 14:14 – 14:21 Dolav Nitay: Learning of Structurally Unambiguous Probabilistic Grammars

    Abstract:

    The problem of identifying a probabilistic context-free grammar has two aspects: the first is determining the grammar’s topology (the rules of the grammar) and the second is estimating probabilistic weights for each rule. Given the hardness results for learning context-free grammars in general, and probabilistic grammars in particular, most of the literature has concentrated on the second problem. In this work we address the first problem. We restrict attention to structurally unambiguous weighted context-free grammars (SUWCFG) and provide a query learning algorithm for structurally unambiguous probabilistic context-free grammars (SUPCFG). We show that SUWCFG can be represented using co-linear multiplicity tree automata (CMTA), and provide a polynomial learning algorithm that learns CMTAs. We show that the learned CMTA can be converted into a probabilistic grammar, thus providing a complete algorithm for learning a structurally unambiguous probabilistic context-free grammar (both the grammar topology and the probabilistic weights) using structured membership queries and structured equivalence queries. This work was presented at AAAI’21, and was co-authored with Dana Fisman and Michal Ziv-Ukelson.
  • 14:21 – 14:28 Gaëtan Staquet: Learning Realtime One-Counter Automata

    Abstract:

    Automata used to model a software or hardware system can be large and tedious to construct. The concept of active automata learning, introduced by Angluin in [1], aims at inferring an automaton in a black-box manner, that is, by only observing runs of the system without having access to its implementation. In this presentation, we focus on learning of realtime one-counter automata. The main idea behind Angluin’s framework is as follows: a learner wants to learn a deterministic finite automaton (DFA) accepting a language L in interaction with a teacher who knows a DFA accepting L. The interactions with the teacher are limited to two types of queries: membership queries, where the learner asks whether a word belongs to the language L, and equivalence queries, where the learner asks whether a conjectured automaton accepts L. If the answer to an equivalence query is negative, the teacher provides a counterexample (a word witnessing non-equivalence). The algorithm called L∗ proposed in [1] uses a data structure called an observation table to store the learnt knowledge. For each pair (u, v) of words of this 2-D table, a membership query over uv is asked in order to fill the cell. Once the table satisfies two conditions, namely that the table must be closed and consistent, the learner is able to construct a DFA from it and ask an equivalence query about it. If the teacher answers true, then the learning process is done. Otherwise, the learner uses the provided counterexample to refine the observation table and repeats this process. Realtime one-counter automata (ROCA) are deterministic finite automata augmented with a natural-valued counter. On each transition, the allowed counter operations are −1, 0, or +1. A word is accepted by an ROCA if the run ends in an accepting state with a counter value equal to zero, and the counter value never goes below 0 while reading the word.
A language L accepted by an ROCA defines a (potentially) infinite deterministic automaton A_L accepting L, obtained by unfolding the counter value. Fortunately, this automaton A_L has a finite ultimately periodic description, i.e., a description with a “prefix” part followed by a part that is repeated ad infinitum. From this periodic description, it is possible to construct an ROCA accepting L. The idea behind our learning algorithm for L is to first learn a sufficiently large prefix of A_L, extract a valid ultimately periodic description from the prefix, and construct an ROCA accepting L from the description. Fahmy and Roos already proposed a learning algorithm for ROCAs in [2], but we are unable to replicate the proofs. Our algorithm shares some ideas with the learning algorithm proposed in [4] for visibly one-counter automata. However, the process of learning ROCAs is more difficult since the counter value cannot be derived from the alphabet. In our work with Véronique Bruyère (University of Mons) and Guillermo A. Pérez (University of Antwerp), we propose a new learning algorithm for ROCAs, based on L∗. We still have a teacher and a learner interacting, but, this time, four queries are available: membership and equivalence queries as before, counter value queries to request the counter value of a word, and partial equivalence queries to check whether the learnt prefix of the automaton A_L is correct. The observation table must now store the results of both membership and counter value queries and encode the prefix of A_L, up to some counter limit l. Once the table is closed and consistent, a DFA is created and a partial equivalence query is asked about it. If the teacher returns a counterexample, the table is refined, without increasing l. If the teacher answers true, then from this correct prefix of A_L, we extract multiple potential ultimately periodic descriptions, construct an ROCA from each of them, and ask equivalence queries for each ROCA.
If one of them accepts the target language, then the learning process is done. Otherwise, we increase the counter limit l, use one of the counterexamples to refine the table, and repeat the process. We have developed a prototype tool implementing our learning algorithm. Learning an ROCA can be useful to infer a JSON Schema [3], a formalism used to annotate and validate JSON documents. In order to learn a JSON Schema, the four queries are implemented as follows. Membership queries determine whether a document satisfies the schema, which can be checked with a JSON validator. Counter values are defined by either the number of unmatched list-start symbols [ or the number of unmatched object-start symbols { in an (incomplete) JSON document. For equivalence and partial equivalence queries, as in most active learning setups, we employ conformance testing techniques for DFAs (partial equivalence) and JSON generators (full equivalence).

[1] Dana Angluin. “Learning Regular Sets from Queries and Counterexamples”. In: Inf. Comput. 75.2 (1987), pp. 87–106. doi: 10.1016/0890-5401(87)90052-6.
[2] Amr F. Fahmy and Robert S. Roos. “Efficient Learning of Real Time One-Counter Automata”. In: Algorithmic Learning Theory (ALT ’95). Vol. 997. Lecture Notes in Computer Science. Springer, 1995, pp. 25–40. doi: 10.1007/3-540-60454-5_26.
[3] JSON Schema. URL: https://json-schema.org
[4] Daniel Neider and Christof Löding. Learning Visibly One-Counter Automata in Polynomial Time. Technical Report AIB-2010-02. RWTH Aachen, 2010.
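The classical L∗ loop that this abstract builds on can be sketched as follows. This is plain L∗ for DFAs over a toy target language (not the ROCA extension of the talk), with a brute-force equivalence oracle and suffix-based counterexample handling; since every row added to S is distinct, consistency holds automatically and only closedness needs checking:

```python
from itertools import product

ALPHABET = "ab"

def member(word):
    # Toy target language: words with an even number of a's.
    return word.count("a") % 2 == 0

def row(table, prefix, suffixes):
    return tuple(table[prefix + s] for s in suffixes)

def learn():
    prefixes = {""}   # S: access strings (rows of the table)
    suffixes = [""]   # E: distinguishing suffixes (columns)
    while True:
        # Fill the observation table with membership queries.
        boundary = {p + a for p in prefixes for a in ALPHABET} - prefixes
        table = {p + s: member(p + s)
                 for p in prefixes | boundary for s in suffixes}
        # Closedness: every boundary row must equal some prefix row.
        rows = {row(table, p, suffixes): p for p in prefixes}
        unclosed = next((b for b in boundary
                         if row(table, b, suffixes) not in rows), None)
        if unclosed is not None:
            prefixes.add(unclosed)
            continue
        # Build the hypothesis DFA: states are the distinct rows.
        start = row(table, "", suffixes)
        delta = {(row(table, p, suffixes), a): row(table, p + a, suffixes)
                 for p in prefixes for a in ALPHABET}
        accept = {r for r, p in rows.items() if table[p]}
        def accepts(word):
            state = start
            for a in word:
                state = delta[(state, a)]
            return state in accept
        # Brute-force equivalence query over all short words.
        cex = next(("".join(w)
                    for n in range(7)
                    for w in product(ALPHABET, repeat=n)
                    if accepts("".join(w)) != member("".join(w))), None)
        if cex is None:
            return accepts
        # Refine: add all suffixes of the counterexample to E.
        for i in range(len(cex) + 1):
            if cex[i:] not in suffixes:
                suffixes.append(cex[i:])
```

The ROCA algorithm of the talk replaces the membership-only table with one that also records counter values, and the single equivalence query with partial and full equivalence queries.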
  • 14:28 – 14:35 Jan Otop: Active Learning Infinite-Word Automata

    Abstract:

    Regular languages can be actively learned with membership and equivalence queries in polynomial time. The learning algorithm, called the L^* algorithm, iteratively constructs the right congruence relation of a given regular language L, and returns the minimal DFA recognizing L. The L^* algorithm has been adapted to various types of automata: tree automata, weighted automata, nominal automata. However, an extension to infinite-word automata has been elusive. I will discuss why extending L^* to infinite-word automata is difficult. Next, I will present an alternative approach, in which we learn the automaton rather than its language. That is, we consider a hidden target automaton T and the learning algorithm asks queries about the language of T and its structure. The learning algorithm returns an automaton equivalent to T and structurally similar to T. We consider T to be either a deterministic Büchi automaton or a LimAvg-automaton.
  • 14:35 – 14:42 Lewis Hammond: Multi-Agent Reinforcement Learning with Temporal Logic Specifications

    Abstract:

    In recent and ongoing work, we have studied the problem of how a group of agents may learn to satisfy temporal logic specifications in unknown, stochastic environments. From a learning perspective these specifications provide a rich formal language with which to capture tasks or objectives, while from a logic and automated verification perspective the introduction of learning capabilities allows for practical applications in large, stochastic, unknown environments. Previous efforts (and in fact all those that consider full linear temporal logic or have correctness guarantees) have focused predominantly on the single-agent, single-objective setting. In contrast, we develop the first multi-agent reinforcement learning technique with convergence and correctness guarantees, even when using function approximators (such as neural networks). Our approach is also novel in its ability to handle lexicographic and linear combinations of specifications alongside standard, scalar utility functions. Based on initial theoretical results, our ongoing work seeks to apply our improved algorithm, Automaton/Logic Multi-Agent (Approximate) Natural Actor-Critic, or ALMA^2NAC, to a range of test domains, thoroughly benchmarking it against plausible contenders that range from probabilistic model checkers to deep multi-agent reinforcement learning algorithms. The proposed presentation will seek to summarise the main results and insights from this line of work, emphasising how learning can be fruitfully applied to settings that combine logic, games, and automata.
  • 14:42 – 14:49 Rajarshi Roy: Learning Linear Temporal Properties from Noisy Data: A MaxSAT Approach

    Abstract:

    We address the problem of inferring descriptions of system behavior using Linear Temporal Logic (LTL) from a finite set of positive and negative examples. Most of the existing approaches for solving such a task rely on predefined templates for guiding the structure of the inferred formula. The approaches that can infer arbitrary LTL formulas, on the other hand, are not robust to noise in the data. To alleviate such limitations, we devise two algorithms for inferring concise LTL formulas even in the presence of noise. Our first algorithm infers minimal LTL formulas by reducing the inference problem to a maximum satisfiability (MaxSAT) problem and then using off-the-shelf MaxSAT solvers to find a solution. To the best of our knowledge, we are the first to incorporate the use of MaxSAT solvers for inferring formulas in LTL. Our second learning algorithm relies on the first algorithm to derive a decision tree over LTL formulas based on a decision tree learning algorithm. We have implemented both our algorithms and verified that they are efficient in extracting concise LTL descriptions even in the presence of noise.
  • 14:49 – 14:56 Xuan Xie: Property-Directed Verification of Recurrent Neural Networks

    Abstract:

    In this presentation, I will present a property-directed approach to verifying recurrent neural networks (RNNs). To this end, we learn a deterministic finite automaton as a surrogate model from a given RNN using active automata learning. This model may then be analyzed using model checking as a verification technique. The term property-directed reflects the idea that our procedure is guided and controlled by the given property rather than performing the two steps separately. We show that this not only allows us to discover small counterexamples fast, but also to generalize them by pumping towards faulty flows hinting at the underlying error in the RNN.

Session 21: Poster session
