Highlights 2019

17–20 September 2019, Warsaw

The goal of Highlights conferences is to integrate the community working on logic, games and automata.

IMPORTANT DATES

10 June AoE: Submission deadline
20 June: Notification
31 August: Early registration deadline
17 September 2019: Tutorial day
18–20 September 2019: Conference

PREVIOUS EDITIONS

  1. Highlights 2018 (Berlin, 18–21 September)
  2. Highlights 2017 (London, 12–15 September)
  3. Highlights 2016 (Brussels, 6–9 September)
  4. Highlights 2015 (Prague, 15–18 September)
  5. Highlights 2014 (Paris, 2–5 September)
  6. Highlights 2013 (Paris, 18–21 September)

No proceedings

There are no proceedings. You present your best work, whether it is already published elsewhere or not yet published.

Short

The conference is three days long. Contributed talks are around ten minutes each.

Cheap

The participation costs are modest. Warsaw is easy to reach.

Everybody is doing it!

Let's use the conference model that is so successful in other fields, like mathematics.

Registration

To register, click here.

The early registration deadline is August 31; later registration will incur an additional cost.

Colocated event

School on Nominal Techniques — 3rd School on Foundations of Programming and Software Systems (FOPSS), 10–15 September

Committees

PROGRAM COMMITTEE

ORGANISING COMMITTEE

STEERING COMMITTEE

Venue

Location

All lectures will take place at the Warsaw University Library. The address is:

Dobra 55/56, 00-312 Warsaw

The library is within walking distance of Warsaw University’s main campus. The bus stop “Uniwersytet” on Krakowskie Przedmieście Street is served by several bus lines, including line 175, which connects directly with Chopin Airport and the Central Railway Station. The library is also within walking distance of the metro station “Centrum Nauki Kopernik” on line M2.

Picnic

An informal picnic will take place on Thursday, September 19 at 7pm, on the banks of the Vistula river near the library. Participants are asked to bring something to eat and drink.

Invited Presentations

Invited Tutorials

  • Sebastian Siebertz, Nowhere dense graph classes and algorithmic applications
  • Guy Avni, Formal Methods Meets Algorithmic Game Theory

Invited Talks

  • Dmitry Chistikov, On the complexity of logics over the integers
  • Marie Fortin, Expressivity of first-order logic and star-free propositional dynamic logic over ordered structures

Invited Spotlight Talks

Upper and Lower Bounds for Reachability in Petri Nets and Vector Addition Systems

  • Jérôme Leroux, Reachability in Vector Addition Systems is Not Elementary
  • Sylvain Schmitz, Reachability in Vector Addition Systems is Primitive-Recursive in Fixed Dimension

Program

September 17th, Tuesday: Tutorial Day

Registration

Tutorial 1 (Part 1)

  • Sebastian Siebertz: Nowhere dense graph classes and algorithmic applications

    Abstract:

    The notion of nowhere denseness was introduced by Nešetřil and Ossona de Mendez and provides a robust concept of uniform sparseness of graph classes. Nowhere dense graph classes generalize many familiar classes of sparse graphs such as the class of planar graphs and classes that exclude a fixed minor or topological minor. They admit several seemingly unrelated natural characterisations that lead to strong algorithmic applications. In particular, the model-checking problem for first-order logic is fixed-parameter tractable over these classes. In this tutorial I will give an introduction to the theory of nowhere dense graph classes with a focus on different characterisations and algorithmic applications.
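As a toy illustration of the model-checking problem mentioned in the abstract, the sketch below (my own, not part of the tutorial) evaluates the FO sentence ∃x ∀y (x = y ∨ E(x, y)), "some vertex dominates the graph", by naive enumeration; the point of the tutorial is that on nowhere dense classes such checks can be done in fixed-parameter tractable time rather than by brute force:

```python
def has_dominating_vertex(n, edges):
    """Naively evaluate the FO sentence  ∃x ∀y (x = y ∨ E(x, y))
    on an undirected graph with vertices 0..n-1.
    Brute force is O(n^2) per check; the tutorial's theory gives
    fixed-parameter tractable model checking on nowhere dense classes."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    return any(all(x == y or (x, y) in adj for y in range(n))
               for x in range(n))
```

For example, a star graph has a dominating centre, while two disjoint edges have no dominating vertex.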

coffee

Tutorial 1 (Part 2)

  • Sebastian Siebertz: Nowhere dense graph classes and algorithmic applications

lunch break

Tutorial 2 (Part 1)

  • Guy Avni: Formal Methods Meets Algorithmic Game Theory

    Abstract:

    Many problems in computer science, math, and economics can naturally be described as a game. As a result, various sub-fields in these disciplines study different types of games. The tutorial describes useful and interesting recent connections between formal methods and algorithmic game theory (AGT, for short). The goal in formal methods is to formally reason about systems. The traditional games that are studied in formal methods are called graph games and they model the interaction between a reactive system and its hostile environment. These are typically ongoing two-player zero-sum games of infinite duration. On the other hand, AGT lies in the intersection between computer science and economics. The games studied in AGT are typically multi-player, non-zero-sum, and one-shot games. We will focus on two meeting points between the two fields. The first is “bidding games”, which are graph games in which in each turn, an auction is held in order to determine which player moves next. We study the effect of using different bidding mechanisms on the properties of the game, and reveal an intriguing connection between bidding games and a fragment of stochastic games called “random-turn games”. In the second meeting point, we will focus on “network games” (sometimes called “congestion games”), and describe various ways of enriching studies of network games in the AGT community by practical considerations that are well-studied in the formal-methods community. This includes, for example, network games with rich specifications, ones with real-time and ongoing behaviors, as well as algorithms for handling network games with huge state spaces.

coffee

Tutorial 2 (Part 2)

  • Guy Avni: Formal Methods Meets Algorithmic Game Theory

September 18th, Wednesday: Day 1

Registration

Invited Talk 1

  • Dmitry Chistikov: On the complexity of logics over the integers

    Abstract:

    Given a system of linear Diophantine equations, how difficult is it to determine whether it has a solution? What changes if equations are replaced with inequalities? If some of the variables are quantified universally? These and similar questions relate to the computational complexity of deciding the truth value of statements in various logics. This includes in particular Presburger arithmetic, the first-order logic over the integers with addition and order.
    In this talk, I will survey constructions and ideas that underlie known answers to these questions, from classical results to recent developments, and open problems.
    First, we will recall the geometry of integer linear programming and how it interacts with quantifiers. This will take us from classical results due to von zur Gathen and Sieveking (1978), Papadimitriou (1981), and others to the geometry of the set of models of quantified logical formulas. We will look at rational convex polyhedra and their discrete analogue, hybrid linear sets (joint work with Haase (2017)), and see, in particular, how the latter form a proper sub-family of ultimately periodic sets of integer points in several dimensions (the semi-linear sets, introduced by Parikh (1961)).
    Second, we will discuss “sources of hardness”: which aspects of the expressive power make decision problems for logics over the integers hard. Addition and multiplication combined enable simulation of arbitrary Turing machines, and restriction of multiplication to bounded integers corresponds to resource-bounded Turing machines. How big can these bounded integers be in Presburger arithmetic? This leads to the problem of representing big numbers with small logical formulae, and we will see constructions by Fischer and Rabin (1974) and by Haase (2014). We will also look at the new “route” for expressing arithmetic progressions (in the presence of quantifier alternation) via continued fractions, recently discovered by Nguyen and Pak (2017).
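As a deliberately naive companion to the questions above: satisfiability of a quantifier-free system of linear inequalities over the integers can be decided by exhaustive search within a bounded box, since existential Presburger arithmetic has a small-model property. The sketch below is my own illustration; the box size is an arbitrary choice for the example, not the actual bound from the results surveyed in the talk.

```python
from itertools import product

def satisfiable(ineqs, num_vars, bound):
    """Search integer points in [-bound, bound]^num_vars satisfying
    every inequality  a·x <= c,  given as (coeffs, c) pairs.
    Purely illustrative: real decision procedures use small-model
    bounds, quantifier elimination, or automata-based methods."""
    for point in product(range(-bound, bound + 1), repeat=num_vars):
        if all(sum(a * xi for a, xi in zip(coeffs, point)) <= c
               for coeffs, c in ineqs):
            return point
    return None

# x + 2y <= 5, x >= 0, y >= 0, x + 2y >= 5  together encode  x + 2y = 5
system = [([1, 2], 5), ([-1, 0], 0), ([0, -1], 0), ([-1, -2], -5)]
```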

coffee

Session 1: LOGICS

  • Laureline Pinault. Undecidability of MSO with Path-Measure Quantifier via Probabilistic Automata

    Abstract:

    We consider monadic second-order logic extended with the qualitative path-measure quantifier that is interpreted over the infinite binary tree. This quantifier says that the set of infinite paths in the tree satisfying a formula has Lebesgue-measure one. We prove that this logic is undecidable. Towards this we first show that the emptiness problem of qualitative universal parity tree automata is undecidable, where qualitative means that a run of a tree automaton is accepting if the set of paths in the run that satisfies the acceptance condition has Lebesgue-measure equal to one. This result also implies the undecidability of the emptiness problem of a recently introduced model of alternating probabilistic tree automata.
    This is joint work with Raphaël Berthon, Nathanaël Fijalkow, Emmanuel Filiot, Shibashis Guha, Bastien Maubert, Aniello Murano, Jean-François Raskin, Sasha Rubin and Olivier Serre.
    The work has been written up but not yet published.
  • Szymon Toruńczyk. Aggregation, Enumeration, and Updates on Sparse Databases via Circuits

    Abstract:

    We propose an algebraic framework for studying efficient algorithms for query evaluation, aggregation, enumeration, and maintenance under updates on sparse databases. Our framework allows us to treat these problems in a unified way, by considering various semirings depending on the problem at hand. We propose two query languages extending first-order logic by forms of semiring aggregation. In the first language, a query is evaluated in a fixed semiring. For the various semirings considered, we obtain efficient algorithms for the evaluation and maintenance of query outputs. By employing the provenance semiring, we obtain as an application a linear-time algorithm which takes as input a first-order query and a sparse database, and computes a dynamic data structure that allows the answers to the query to be enumerated with constant delay. The data structure can be updated in constant time upon updates to the database that do not modify its Gaifman graph. Our second query language extends the first by allowing aggregates from multiple semirings simultaneously. We show that such queries can be evaluated in linear time on sparse databases.
  • Stanislav Kikot. On Monotone Determinacy of Recursive Queries over Disjunctive Views

    Abstract:

    I’m going to talk about new results on monotone determinacy of recursive queries over views containing disjunction, recently obtained in joint work with Michael Benedikt, Piotr Ostropolski-Nalewaja and Miguel Romero.
  • Steffen van Bergerem. Learning Concepts Definable in First-Order Logic with Counting

    Abstract:

    We study classification problems over relational background structures for hypotheses that are defined using logics with counting. The aim of this paper is to find learning algorithms that run in time sublinear in the size of the background structure. We show that hypotheses defined by FOCN(P)-formulas over structures of polylogarithmic degree can be learned in sublinear time. Furthermore, we show that for structures of unbounded degree there is no sublinear learning algorithm for first-order formulas.

    This paper has been accepted for publication at LICS 2019.
  • Gabriel Istrate. Proof complexity of Kneser-Lovasz formulas and other combinatorial principles: the role of kernelization

    Abstract:

    The work presented here combines two different sets of results:

    – First, a paper written together with James Aisenberg, Samuel Buss, Maria-Luisa Bonet and Adrian Craciun that has already appeared in Information and Computation in 2018.
    – Second, new, ongoing research with Adrian Craciun that extends and situates the contributions of the previous paper in the context of the notions of data reduction and kernelization from parameterized complexity.


    Our original motivation was the problem of investigating the proof complexity of Kneser-Lovasz tautologies. They are a family of propositional tautologies parameterized by an integer k that were introduced by Istrate and Craciun (SAT’2014), encoding a principle of combinatorial topology which generalizes the well-known pigeonhole principle. It was shown in our SAT’2014 paper that Kneser-Lovasz formulas have polynomial-size Frege proofs in the case k=2 and extended Frege proofs for k=3. The complexity of the case k ≥ 4 (for which only non-effective mathematical proofs based on algebraic topology were previously known) had been left open.

    We show that Kneser-Lovasz tautologies have quasipolynomial size Frege proofs and polynomial-size extended Frege proofs for all fixed values of k. On the other hand we show that the so-called Octahedral Tucker lemma (a mathematical result used to prove the Kneser-Lovasz theorem) has a polynomial-size miniaturization, the so-called “truncated Tucker lemma” that yields a class of propositional formulas for which no tractable Frege proofs are known.

    In new work, we show that the ingredients behind our results in the 2018 paper are proof-theoretic versions of the notions of data reduction and kernelization from parameterized complexity: we state a more general (meta)theorem on data reduction rules whose soundness can be witnessed by tractable Frege proofs that can be used to reinterpret our original results on Kneser-Lovasz tautologies, but can also be used to prove other new results.

    We give examples of the power of this metatheorem by giving a couple of applications, including results on the proof complexity of stable Kneser-Lovasz formulas, and of formulas encoding lower bounds on the vertex cover in general graphs.
  • Benjamin Lucien Kaminski. Quantitative Separation Logic – A Logic for Reasoning about Probabilistic Pointer Programs

    Abstract:

    We present quantitative separation logic (QSL). In contrast to classical separation logic, QSL employs quantities which evaluate to non-negative real numbers instead of predicates which evaluate to Boolean values. The characteristic connectives of classical separation logic, separating conjunction (*) and separating implication (–*), are both lifted from predicates to quantities. This extension is conservative: both connectives are backward compatible with their classical analogs and obey the same laws, e.g. modus ponens, adjointness of * and –*, etc.

    Furthermore, we develop a weakest precondition calculus for quantitative reasoning about probabilistic pointer programs in QSL. This calculus is a conservative extension of both Reynolds’ weakest preconditions for heap manipulating programs and McIver & Morgan’s weakest preexpectations for probabilistic programs. In particular, our calculus preserves the frame rule which enables local reasoning – a key principle of separation logic.

    Joint work with Kevin Batz, Joost-Pieter Katoen, Christoph Matheja, and Thomas Noll.
  • Eugenia Ternovska. A Logic of Information Flows

    Abstract:

    A challenge in descriptive complexity is to identify logics with low complexity that simultaneously express both fundamental reachability properties and counting properties on unordered structures. We present a family of logics that allow fine control of the complexity of order-independent computations. The approach is based on adding the ability to reason about information propagations in first-order logic with fixed points. Two parameters may be used to control expressiveness and complexity: the set of definable connectives, and the expressiveness of atomic units.
    We identify a fragment with polytime data complexity and show that it can express both reachability and counting properties on unordered structures. The fragment is less expressive than FO(FP) both in its FO part and its recursive construct, but is more expressive in its ability to constrain information flows by means of (definable) modalities, similar to Dynamic logic.
  • Arthur Milchior. MightyL: From Metric Interval Temporal Logic to Timed/Signal automata

    Abstract:

    Metric Interval Temporal Logic (MITL) was first proposed in the early 1990s as a specification formalism for real-time systems. MITL admits two semantics, depending on whether we consider that events occur discretely or continuously in time. Alur, Feder and Henzinger, and Kini, Krishna and Pandya have proven that the emptiness problem is decidable for the discrete and the continuous semantics, respectively. Both decidability results construct, from an MITL formula φ, a timed or signal automaton recognizing the language defined by φ. The emptiness of the automaton can then be decided by enumerating the different configurations (regions or zones) that it can visit.

    In this talk, we will explain how we improved the computation time of the decision procedure. This was done by modifying the translation from MITL to automata, in order to reduce the number of regions that the resulting automata may potentially visit.

    This work was done jointly with Thomas Brihaye, Gilles Geeraerts, Hsi-Ming Ho and Benjamin Monmege.
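To make the bounded temporal operators of MITL slightly more concrete, here is a toy checker (an illustrative sketch of my own, unrelated to the authors’ tool) for a single formula F_[lo,hi] p, "p holds at some time in the interval [lo, hi]", over a finite timed word given as (timestamp, propositions) pairs:

```python
def eventually_within(trace, prop, lo, hi):
    """Evaluate the MITL-style formula  F_[lo,hi] prop  at time 0 over a
    finite timed word: does some event with timestamp in [lo, hi]
    satisfy prop?  trace is a list of (timestamp, set-of-propositions)."""
    return any(lo <= t <= hi and prop in props for t, props in trace)
```

The hard part handled by the automata constructions above is, of course, doing this compositionally for nested operators over infinite behaviours.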

lunch break

Session 2a: POPULATION PROTOCOLS AND PROBABILISTIC SYSTEMS

  • Stefan Jaax. Succinct Population Protocols

    Abstract:

    Population protocols are a model of distributed computation by finite-state, anonymous and identical agents. Population protocols compute predicates in a population by reaching a lasting consensus through random pairwise interactions.
    A classical result establishes that population protocols compute exactly the predicates definable in Presburger arithmetic, the first-order theory of the natural numbers with addition. However, the best known upper bound for the number of states required per agent in a population protocol is exponential, where the predicate is given as a quantifier-free formula with coefficients encoded in binary.

    We show that every Presburger predicate can be represented succinctly in population protocols: For every quantifier-free formula, there exists an equivalent population protocol where the number of states per agents is polynomial in the size of the formula.

    Our proof is constructive and may lead to space-efficient synthesis of multi-agent systems in low-resource environments, such as robot swarms, where each agent has very little memory at its disposal.


    Moreover, as opposed to a previous succinct construction for fragments of Presburger arithmetic presented at STACS’2018, our new construction does not need any additional agents besides the input agents.


    This is joint work with Michael Blondin and Javier Esparza.
  • Chana Weil-Kennedy. The Complexity of Verifying Observation Population Protocols.

    Abstract:

    Population protocols [Angluin et al., PODC, 2004] are a theoretical model of interactions between identical, limited-means, mobile devices. The model postulates a “soup” of indistinguishable finite-state agents that interact and change their states according to a joint transition function. Computations are infinite sequences of interactions satisfying a strong fairness constraint. Together, the agents of a population protocol compute information about their initial configuration, i.e. the number of agents in each state initially. In the general population protocol model, agents interact in pairs, and this model has been extensively studied [e.g. see the survey in the Bulletin of the EATCS, 2018]. Angluin et al. studied its expressiveness, and showed that any Presburger predicate can be computed by such a protocol. Esparza et al. studied associated verification questions, such as the correctness problem: given a protocol and a predicate, does the protocol compute the predicate? They showed that this problem is TOWER-hard.

    This prompted the investigation of restricted protocol models that describe more limited interaction schemas, but with possibly lower complexity of the verification problems. In their seminal paper on population protocols, Angluin et al. introduced such restrictions to model certain types of one-way communication with asynchronous message-passing. In particular, they describe observation models, in which an interaction consists of a receiver agent observing the state of, or the message sent by, a sender agent. They distinguish two models based on whether the send and receive events happen simultaneously or not, called immediate observation protocols and delayed observation protocols respectively. We study the correctness problem for these models, and establish complexity bounds which are at a (low) finite level of the polynomial hierarchy for delayed observation protocols, and PSPACE-complete for immediate observation protocols.

    This is joint work with Javier Esparza, Stefan Jaax and Mikhail Raskin. Part of the work will appear in a paper at Petri Nets 2019, and the rest is part of a journal article soon to be submitted. This work was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant agreement No 787367.
  • Antonin Kucera. Termination complexity of Parameterized Population Protocols

    Abstract:

    Population protocols form a model where identical finite-state agents interact to collectively decide whether their initial configuration, given by the initial number (x_1, …, x_n) of agents in each state, satisfies a given property, e.g., 2x_1 − x_2 ≥ 0. A protocol is correct if the agents correctly decide the property for all of the infinitely many initial configurations. This makes automatic formal analysis challenging. Furthermore, the literature often describes parameterized families of protocols, e.g., a family of protocols for the predicates ax_1 + bx_2 ≥ c. We describe a methodology for the automatic analysis of the asymptotic running time of such families, based on abstraction techniques, and report on an implementation on top of a constraint solver. Our results show that it is possible to check properties of infinitely many protocols in finite time.

    The presentation is based on recent unpublished results achieved jointly with Michael Blondin, Javier Esparza, Stefan Jaax, and Philipp J. Meyer.
  • Catalin Dima. Model Checking Strategic Abilities in Information-sharing Agents

    Abstract:

    We study the formalism of visibly CGS (vCGS), which contains an explicit expression of private-data sharing in multi-agent systems. In vCGS, each agent “a” has a syntactic/semantic endowment to temporarily allow some other agent “b” to see the value of some of a’s variables.
    We prove the decidability of the model-checking problem for a fragment of ATL with imperfect information on a subclass of vCGS, called A-cast vCGS, in which atom visibilities within a coalition A evolve simultaneously for all agents: whenever some atom is made visible to some agent, it is made visible to all agents in the coalition. On these, we prove that verification of the strategic abilities of coalition A is decidable, that is, decidability holds for formulas that mention only coalition A.
  • Jan Kretinsky. Semi-Quantitative Abstraction and Analysis of Stochastic Population Models

    Abstract:

    Analysis of large continuous-time stochastic systems is a computationally intensive task. In this work, we focus on population models arising from chemical reaction networks (CRNs), which play a fundamental role in the analysis and design of biochemical systems. Many relevant CRNs are particularly challenging for existing techniques due to complex dynamics including stochasticity, stiffness or multimodal population distributions. We propose a novel approach allowing not only to predict, but also to explain both the transient and steady-state behaviour. It focuses on qualitative description of the behaviour and aims at quantitative precision only in orders of magnitude. Firstly, we abstract the CRN into a compact model preserving rough timing information, distinguishing only significantly different populations, but capturing relevant sequences of behaviour. Secondly, we approximately analyse the most probable temporal behaviours of the model through the most probable transitions. As demonstrated on complex CRNs from literature, our approach reproduces the known results, but in contrast to the state-of-the-art methods, it runs with virtually no computational cost and thus offers unprecedented scalability.
  • Edoardo Pirovano. A Counter Abstraction Technique for the Verification of Probabilistic Swarm Systems

    Abstract:

    In my talk, I will be presenting joint work with Alessio Lomuscio, which has
    been published at AAMAS 2019. I will introduce a method for formally verifying
    properties of probabilistic multi-agent systems with a possibly unbounded
    number of agents. I will define our novel semantics, which is both
    parameterised (admitting an arbitrary number of agents) and stochastic. After
    observing that the verification problem against PLTL specifications is in
    general undecidable, I will introduce our partial decision procedure for the
    problem. This decision procedure will rely on constructing an abstract system
    (which captures the behaviour of a possibly unbounded number of agents) using
    a counter abstraction technique. I will show this technique to be sound but
    incomplete, and present our experimental implementation of this method,
    based on the probabilistic model checker PRISM. Finally, I will show the
    results we obtained when using our tool to analyse a foraging protocol from
    swarm robotics.
  • Julia Eisentraut. Stopping Criteria for Value and Strategy Iteration on Concurrent Stochastic Games

    Abstract:

    In this talk, I will present joint work with Jan Křetínský & Alexej Rotar on stopping criteria for value iteration and strategy iteration on concurrent stochastic games (CSG) played on graphs with reachability and safety objectives.

    In CSG, at every round of the game, each player simultaneously and independently chooses a move. The moves then jointly determine the transition taken, which leads to a probability distribution over states. Considering a safety objective for player S, its goal is to maximize the probability of staying within a given set of states while player R maximizes the probability to leave this set, which is its reachability objective.

    These games can be solved by algorithms based on dynamic programming, namely value iteration (VI) as well as strategy iteration (SI), each of them yielding a sequence of under-approximations of the reachability value (and thus over-approximations of the safety value), converging to it in the limit. However, it has been unknown how the error can be bounded at any given moment.

    I will present algorithms to compute:
    1) a sequence of values over-approximating the reachability value;
    2) a sequence of strategies over-approximating the reachability value.

    Together with standard VI and SI, this allows us to define algorithms where the current error of the approximation can be bounded at any point in time yielding stopping criteria for both VI and SI.
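Several talks in this session concern population protocols; as a minimal illustration of the model itself, the sketch below simulates the textbook four-state exact-majority protocol (a standard example, not taken from any of the talks above). For reproducibility it uses a deterministic round-robin scheduler, whereas the model proper assumes random pairwise interactions under a fairness constraint:

```python
def step(s, t):
    """Apply one interaction rule to a pair of agent states, or return None.
    A state is (opinion, active): active A and B cancel to passive;
    an active agent converts a passive agent of the opposite opinion."""
    (op_s, act_s), (op_t, act_t) = s, t
    if act_s and act_t and op_s != op_t:        # cancellation
        return (op_s, False), (op_t, False)
    if act_s and not act_t and op_s != op_t:    # passive adopts s's opinion
        return s, (op_s, False)
    if act_t and not act_s and op_s != op_t:    # passive adopts t's opinion
        return (op_t, False), t
    return None

def majority(opinions):
    """Run the protocol to a stable configuration under a round-robin
    scheduler; return the consensus opinion, or None on a tie."""
    agents = [(o, True) for o in opinions]      # every agent starts active
    changed = True
    while changed:
        changed = False
        for i in range(len(agents)):
            for j in range(i + 1, len(agents)):
                new = step(agents[i], agents[j])
                if new is not None:
                    agents[i], agents[j] = new
                    changed = True
    outputs = {op for op, _ in agents}
    return outputs.pop() if len(outputs) == 1 else None
```

With a strict majority of one opinion, the cancellations leave only actives of that opinion, which then convert all remaining passives; a tie ends with no actives and no consensus.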

Session 2b: PROGRAM VERIFICATION AND TREE AUTOMATA

  • Pierre Clairambault and Andrzej Murawski. On the expressivity of linear recursion schemes

    Abstract:

    We investigate the expressive power of higher-order recursion schemes (HORS)
    restricted to linear types. Two formalisms are considered: multiplicative
    additive HORS (MAHORS), which feature both linear function types and products,
    and multiplicative HORS (MHORS), based on linear function types only.

    For MAHORS, we establish an equi-expressivity result with a variant of
    tree-stack automata. Consequently, we can show that MAHORS are strictly more
    expressive than first-order HORS, that they are incomparable with second-order
    HORS, and that the associated branch languages lie at the third order of the
    collapsible pushdown hierarchy.

    In the multiplicative case, we show that MHORS are equivalent to a special
    kind of pushdown automata. It follows that any MHORS can be translated to an
    equivalent first-order MHORS in polynomial time. Further, we show that MHORS generate
    regular trees and can be translated to equivalent order-0 HORS in exponential time.
    Consequently, MHORS turn out to have the same expressive power as 0-HORS but they can
    be exponentially more concise.

    Our results are obtained through a combination of techniques from game semantics, the
    geometry of interaction and automata theory.
  • Peter Chini, Roland Meyer and Prakash Saivasan. Parameterized Complexity of Program Verification Tasks

    Abstract:

    Program verification is hard. Decision problems arising from verification tasks usually do not admit satisfying worst-case complexities. On the other hand, program verification can be efficient. Verification tools developed in recent years perform well even on large systems. To resolve this discrepancy between theory and practice, we picked up a recent trend in complexity theory: Parameterized Complexity. Unlike classical complexity theory that relates the complexity to the size of the instance, Parameterized Complexity measures the influence of several parameters on a problem’s complexity. This offers a more fine-grained view. We can separate those parameters of a problem that allow for the construction of efficient algorithms from those that cause the hardness.

    We conducted parameterized complexity analyses of typical verification tasks.
    The main findings are as follows.

    (1) A new algorithm (ESA’17) for Bounded Context Switching, non-polynomial only in the number of context switches and the size of the memory.
    (2) Two new algorithms (TACAS’18) for safety verification in the parameterized Leader-Contributor model (LCM).
    This includes an algorithm running in time single-exponential in the size of the contributors.
    (3) Two new algorithms (submitted) for liveness verification in LCM, showing that checking safety and liveness only differ by a polynomial factor.
    (4) A new polynomial-time algorithm (NETYS’19) for liveness in broadcast networks.

    To obtain these upper bounds, we employ techniques from both fields, Parameterized Complexity and verification. Further, we show that the given algorithms are optimal: the verification tasks cannot be solved more efficiently unless the Exponential Time Hypothesis fails, a standard assumption in Parameterized Complexity.
  • Mehran Hosseini, Joel Ouaknine and James Worrell. Termination of Affine Loops over the Integers

    Abstract:

    We consider the problem of deciding termination of single-path while loops with integer variables, affine updates, and affine guard conditions. The question is whether such a loop terminates on all integer initial values. This problem is known to be decidable for the subclass of loops whose update matrices are diagonalisable,
    but the general case has remained open since being conjectured decidable by Tiwari in 2004. In this paper we show decidability of determining termination for arbitrary update matrices, confirming Tiwari’s conjecture. For the class of loops considered in this paper, the question of deciding termination on a single initial value is a longstanding open problem in number theory. The key to our decision procedure is in showing how to circumvent the difficulties inherent in deciding termination on a single initial value.
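As an illustration of the class of programs considered, the following sketch (a hypothetical example, not taken from the paper) simulates a single-path affine loop and counts its iterations. Simulation alone cannot, of course, decide termination on all integer initial values, which is exactly the problem the paper settles.

```python
def affine_loop_steps(x, y, max_steps=10_000):
    """Simulate the single-path affine loop
           while x > 0: (x, y) := (x + y, y - 1)
    and return the number of iterations until the guard fails,
    or None if max_steps is exceeded.  This particular loop
    terminates on every integer initial value, since y decreases
    forever and eventually drags x below zero."""
    steps = 0
    while x > 0:
        if steps >= max_steps:
            return None
        x, y = x + y, y - 1
        steps += 1
    return steps
```

For instance, from (x, y) = (1, 5) the guard fails after 12 iterations, and any initial value with x ≤ 0 terminates immediately.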
  • Paul Brunet. Pomsets with boxes: towards Atomic CKA.

    Abstract:

    Concurrent Kleene Algebra (CKA) was introduced by Hoare, Moeller, Struth and Wehrman in 2009 as a compositional framework to reason about concurrent programs. Their free model is given by series(-parallel) rational pomset languages, a standard true concurrency semantics. A key feature of CKA is provided by the weak exchange law, allowing one to partially interleave threads. For instance this enables one to consider that a specification that allows for three parallel threads can be refined in an implementation on two processors, provided all the dependencies required in the specification are satisfied.

    However, when trying to use this framework to model realistic programs, this law is often too permissive. By allowing every possible interleaving, one cannot for example provide mutual exclusion guarantees. To try and address this issue while retaining the good properties of CKA (compositionality, decidability, applicability…) we modify the underlying model of pomsets to include so-called “boxes”. Boxes intuitively correspond to an “atomic{…}” construct in a parallel programming language: they isolate part of the program from interference from the rest of the concurrent environment.

    In future work, we plan to develop logics — in the style of van Benthem, Hennessy, and Milner — for describing the properties of CKA models. Here we identify three key challenges: first, to establish appropriate operational/algebraic—logical equivalence theorems; second, to establish logical characterizations of the role of boxes in specifying behaviour; third, identification of local reasoning principles, such as ‘frame rules’, that will support compositional formal reasoning about properties of CKA models.

    In this talk, I will present this new model of pomsets with boxes, together with semantic and axiomatic ways of reasoning about them. This is joint work with David Pym, and all the results have been verified using the proof assistant Coq.
  • Patrick Landwehr. Tree Automata with Global Constraints for Infinite Trees

    Abstract:

    We study an extension of standard tree automata on infinite trees that allows for the use of global constraints.
    Such a tree automaton with global equality and disequality constraints can test (dis)equality between subtrees which may be arbitrarily far apart. In particular, this kind of automaton can check that all subtrees at whose root the run reaches a state q are identical, or that these trees differ from the subtrees at which a state q' is reached.

    We formally introduce a general model of such automata and briefly tackle its closure properties. Then we show the decidability of the emptiness problem for the case that the given automaton only uses equality constraints.
  • David Barozzini. Cost Logics, Safe Schemes, and Downward Closures

    Abstract:

    Higher order recursion schemes are a powerful formalism used to define languages of possibly infinite ranked trees. They extend regular and context-free grammars, and are equivalent to simply typed λY-calculus and collapsible pushdown automata.
    In this work we prove that, under a syntactical constraint called safety, the model checking problem for schemes against properties defined by B-quasi-weak automata is decidable. We then exploit this result to show how to compute downward closures of languages of finite trees recognized by safe schemes.
  • Alexander Rabinovich and Doron Tiferet. Degrees of Ambiguity of Büchi Tree Automata

    Abstract:

    An automaton is unambiguous if for every input it has at most one accepting computation. An automaton is finitely (respectively, countably) ambiguous if for every input it has at most finitely (respectively, countably) many accepting computations. An automaton is bounded ambiguous if there is k ∈ N, such that for every input it has at most k accepting computations. In this presentation we consider nondeterministic Büchi automata (NBA) over infinite trees, and prove that it is decidable in polynomial time whether an automaton is unambiguous, bounded ambiguous, finitely ambiguous, or countably ambiguous.

coffee

Session 3: Transducers

  • Mikolaj Bojanczyk. Polyregular functions

    Abstract:

    I will describe a class of string-to-string functions, called the polyregular functions.
    These functions admit numerous equivalent descriptions: (a) automata with pebbles; (b) a fragment
    of lambda calculus; (c) a fragment of for-programs; (d) compositions of
    certain atomic functions, such as reverse or a form of string squaring; and (e) string-to-string
    interpretations definable in monadic second-order logic.
  • Sougata Bose. Origin-equivalence and resynchronizers for two-way transducers

    Abstract:

    In the origin semantics, the output symbols produced by a transducer are annotated with information about the input position responsible for producing those symbols. We first show that the origin semantics allows one to recover decidability of equivalence and containment for non-deterministic two-way transducers — whereas these problems are known to be undecidable in the classical semantics. We then consider a variant of the containment problem where two two-way transducers are compared under the origin semantics, but the origins can be modified by some given resynchronization relation. Resynchronizations for one-way transducers were proposed by Filiot et al. in 2016. We introduce MSO-resynchronizations, and show that they capture resynchronizations for one-way transducers. Containment of two-way transducers up to bounded MSO-resynchronizations turns out to be reducible to containment under the origin semantics, so it is decidable. We also consider the problem of synthesizing a resynchronization which makes two classically equivalent transducers origin-equivalent. We show that this problem is undecidable in general, but decidable for functional transducers.

    This talk is based on joint works with Anca Muscholl, Vincent Penelle, Gabriele Puppis and Krishna S.
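To make the origin semantics concrete, here is a small illustrative sketch (the transducers T1 and T2 are hypothetical examples, not from the talk): both produce the string (ab)^n on any input of length n, so they are classically equivalent, yet they annotate the b's with different input positions and are therefore not origin-equivalent.

```python
def T1(w):
    """Per input position i, output 'a' and 'b', both with origin i."""
    out = []
    for i in range(len(w)):
        out += [('a', i), ('b', i)]
    return out

def T2(w):
    """Produces the same output string on every input, but each 'b'
    (except the last) originates one input position later than in T1."""
    if not w:
        return []
    out = [('a', 0)]
    for i in range(1, len(w)):
        out += [('b', i), ('a', i)]
    out.append(('b', len(w) - 1))
    return out
```

On input 'xyz' both yield the string 'ababab', but the origin of the first 'b' is 0 under T1 and 1 under T2, so the annotated outputs differ.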
  • Jan Martens. Resynchronizability of one-way transducers is undecidable

    Abstract:

    Origin semantics, introduced by Bojańczyk, is a fine-grained semantics for transducers that not only expresses the relation between input and output words, but also includes a function that, given an output position, returns the input position where it was produced: the origin. Two transducers that are equivalent with respect to the classical semantics may differ in the origin semantics if they produce their output in different ways. Considering this finer origin semantics instead of the classical one allows one to recover the decidability of equivalence, as well as machine-independent characterizations of regular transductions, among other properties. Resynchronizations as discussed in [1] express a similarity between different origin semantics that have the same underlying relation. This corresponds to a relaxation of the strict origin semantics, allowing an MSO-definable distortion. In [1], it is shown that containment up to a fixed resynchronization is decidable in PSPACE. We study the notion of containment up to some unknown resynchronization. We show that this relation forms a preorder, and that it is undecidable already in the case of one-way non-deterministic transducers. To do so we introduce the notion of desynchronized blocks, witnessing the absence of a resynchronization.

    This is joint work with Denis Kuperberg.

    References:
    [1] Sougata Bose, Anca Muscholl, Vincent Penelle, and Gabriele Puppis. Origin-equivalence of
    two-way word transducers is in PSPACE. In IARCS Annual Conference on Foundations of
    Software Technology and Theoretical Computer Science (FSTTCS’18), volume 122 of LIPIcs,
    pages 1–18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018.
  • Anca Muscholl. Equivalence of finite-valued streaming string transducers is decidable

    Abstract:

    We show that the equivalence problem for finite-valued streaming string transducers is decidable, thus solving a question left open in an ICALP’11 paper by Alur and Deshmukh. We follow a proof idea due to Culik and Karhumäki that uses Ehrenfeucht’s conjecture. Our proof is much more involved, because streaming string transducers produce their outputs piecewise, in contrast to one-way or two-way transducers, which produce their output linearly while reading the input. The solution is to come up with a suitable normalization procedure for SST, based on some (mild) word combinatorics and word equations.

    The paper will be presented at ICALP’19, and it is co-authored with Gabriele Puppis.
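As a minimal example of the piecewise output production mentioned above, here is a one-register SST computing string reversal (a standard textbook example, not the construction of the paper):

```python
def sst_reverse(w):
    """One-register streaming string transducer: on each input letter a,
    update the register r := a + r (prepend), and output r at the end
    of the input.  The output is thus assembled piecewise rather than
    emitted left to right while reading."""
    r = ''
    for a in w:
        r = a + r
    return r
```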
  • Janusz Schmude. Deciding Transducer Equivalence via Simulating an Algebra With a Ring of Polynomials

    Abstract:

    Register automata (RA) over a given algebra are an expressive class of transducers; they cover Streaming String Transducers (SST) in the case of the free monoid and Streaming Tree Transducers (STT) in the case of the ordered forest algebra. It is well known that equivalence is decidable for both SST and STT. In this talk we present a proof of decidability of equivalence of register automata over other algebras by simulating those algebras with a ring of polynomials (see the Hilbert Method). We conclude with an undecidability result in the case of a ring of polynomials additionally equipped with a substitution operation.

    This is joint work with Adrien Boiret and Radosław Piórkowski.
  • Amina Doumane. First-order tree-to-tree functions

    Abstract:

    We define a class of functions, called first-order tree-to-tree functions,
    which manipulates objects such as trees, trees of trees, pairs of trees, etc.

    The definition is in the style of regular expressions: we start with some basic functions, and more complex functions are constructed using some combinators (most importantly, composition).

    Our main result is that first-order tree-to-tree functions are exactly the same as first-order tree-to-tree transductions.
  • Adrien Boiret. Deciding equivalence on symbolic DTop with lookahead

    Abstract:

    Symbolic tree transducers are programs that transform data trees with an infinite signature.
    The equivalence problem of deterministic symbolic top-down tree transducers (DTop) can be reduced to that of classical DTop.
    Assuming that all operations related to the processing of data values are in PTime,
    the classical and symbolic equivalence problem are of similar complexities.
    This result is easily extended to symbolic DTop with finite lookahead and thus to deterministic symbolic bottom-up tree transducers.
    However, contrary to the finite alphabet case, a finite lookahead is not enough to encompass all the expressive power of extended transducers, i.e. transducers with more than one symbol of pattern matching.
    We present ongoing work on a more reasonable definition of a class of DTop with symbolic lookahead, both more expressive than extended DTop and with efficient reductions to the classical case.
    Joint work with Vincent Hugot, Joachim Niehren.
  • Ramanathan Thinniyam Srinivasan. Regular Separability and Intersection Emptiness are Independent Problems

    Abstract:

    The problem of regular separability asks, given two languages K and L, whether there exists a regular language S with K ⊆ S and S ∩ L = ∅. This problem has recently been studied for various classes of languages. All the results on regular separability obtained so far exhibited a noteworthy correspondence with the intersection emptiness problem: in each case, regular separability is decidable if and only if intersection emptiness is decidable. This raises the question whether, under mild assumptions, regular separability can be reduced to intersection emptiness and vice versa. We present counterexamples showing that neither of the two problems can be reduced to the other. More specifically, we describe language classes C_1, D_1, C_2, D_2 such that (i) intersection emptiness is decidable for C_1 and D_1, but regular separability is undecidable for C_1 and D_1, and (ii) regular separability is decidable for C_2 and D_2, but intersection emptiness is undecidable for C_2 and D_2.
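A hypothetical toy instance of the definition (not from the paper): the non-regular language K = { a^n b^n } is separated from L = { words containing the factor 'ba' } by the regular language S = a*b*, since K ⊆ S and S ∩ L = ∅. The brute-force check below verifies this on all short words.

```python
from itertools import product

def in_K(w):          # K = { a^n b^n : n >= 0 }, not regular
    n = len(w) // 2
    return w == 'a' * n + 'b' * n

def in_L(w):          # L = words containing the factor 'ba'
    return 'ba' in w

def in_S(w):          # S = a*b*, regular: over {a,b} this means no 'ba' factor
    return 'ba' not in w

def is_separator(max_len=7):
    """Check K ⊆ S and S ∩ L = ∅ on all words over {a,b} up to max_len.
    Here S is by construction the complement of L, so disjointness is
    immediate; the inclusion K ⊆ S is the part being exercised."""
    for n in range(max_len):
        for tup in product('ab', repeat=n):
            w = ''.join(tup)
            if in_K(w) and not in_S(w):
                return False
            if in_S(w) and in_L(w):
                return False
    return True
```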

Break

Session 4: PARITY GAMES AND GAMES WITH FINITE MEMORY

  • Mickael Randour. Games Where You Can Play Optimally With Finite Memory

    Abstract:

    Two-player games provide a natural framework for controller synthesis, with well-known connections with logic and automata. A seminal result in the study of two-player (turn-based) games on (finite) graphs is due to Gimbert and Zielonka [1], who established a complete characterization of preference relations (and consequently, winning objectives) for which players can play optimally with memoryless strategies, i.e., without looking at the past at all. Almost all simple objectives in the literature fall in this class and thus yield memoryless determined games. Still, the recent appearance of more complex objectives and, above all, the increasing resort to multi-objective games (often required to accurately account for trade-offs between several qualitative and quantitative aspects in real-world applications) has led to games where the players do need to use memory, which can be finite or even infinite in some cases (e.g., multi-mean-payoff games). In this talk, I will present ongoing research, where we tackle the question of finite-memory strategies in a flavor similar to Gimbert and Zielonka’s approach to memoryless strategies. We will see that obtaining a perfect equivalent of Gimbert and Zielonka’s results is out of the question, as counter-examples exist; and we will describe how we circumvent the related obstacles to achieve a general characterization of preference relations for which finite memory suffices to play optimally. My talk will focus on examples and key concepts to highlight the core challenges and fundamental differences with the memoryless case. This talk is based on joint work with Patricia Bouyer, Stéphane Le Roux, Youssouf Oualhadj and Pierre Vandenhove.

    [1] Hugo Gimbert, Wieslaw Zielonka: Games Where You Can Play Optimally Without Any Memory. CONCUR 2005: 428-442.
  • Karoliina Lehtinen. From Parity to Büchi with universal trees.

    Abstract:

    Automata translations are instrumental to understanding the trade-offs between different automata types. Better translations can yield better algorithms for checking emptiness and containment, while lower bounds show differences in the power of different automata models. Translations of alternating parity word automata into automata with simpler acceptance conditions, such as Büchi or weak, are particularly interesting as such a translation implies a solution to parity games whose complexity matches the blow-up in the state-space of the automaton.

    I will present a new translation of alternating parity automata on infinite words to alternating weak automata. The blow-up of the number of states is related to the size of the smallest universal ordered trees and hence it is quasi-polynomial, and only polynomial if the asymptotic number of priorities is logarithmic in the number of states.
    This is an exponential improvement on the translation of Kupferman and Vardi (2001) and a quasi-polynomial improvement on the translation of Boker and Lehtinen (2018).

    This is joint work with Laure Daviaud and Marcin Jurdzinski; a preprint is available at https://arxiv.org/abs/1903.12620.
  • Paweł Parys. Parity Games: Zielonka’s Algorithm in Quasi-Polynomial Time

    Abstract:

    Calude, Jain, Khoussainov, Li, and Stephan (2017) proposed a quasi-polynomial-time algorithm solving parity games. After this breakthrough result, a few other quasi-polynomial-time algorithms were introduced; none of them is easy to understand. Moreover, it turns out that in practice they operate very slowly. On the other hand, there is Zielonka’s recursive algorithm, which is very simple, exponential in the worst case, and the fastest in practice. We combine these two approaches: we propose a small modification of Zielonka’s algorithm, which ensures that the running time is at most quasi-polynomial. In effect, we obtain a simple algorithm that solves parity games in quasi-polynomial time. We also hope that our algorithm, after further optimizations, can lead to an algorithm that shares the good performance of Zielonka’s algorithm on typical inputs, while reducing the worst-case complexity on difficult inputs.
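For reference, here is a compact sketch of the classical Zielonka recursion, in its unmodified, worst-case exponential form (the quasi-polynomial modification of the abstract is not reproduced here). The game is assumed total, i.e. every node has at least one successor.

```python
def attractor(nodes, owner, succ, player, target):
    """Least set containing `target` from which `player` can force a visit
    to `target`: player-owned nodes with some successor inside the set,
    opponent-owned nodes with all successors inside it (edges restricted
    to the current subgame `nodes`)."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in attr:
                continue
            outs = [u for u in succ[v] if u in nodes]
            if owner[v] == player:
                ok = any(u in attr for u in outs)
            else:
                ok = bool(outs) and all(u in attr for u in outs)
            if ok:
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, owner, priority, succ):
    """Return (W0, W1), the winning regions of players 0 and 1.  Player 0
    wins a play iff the highest priority seen infinitely often is even."""
    if not nodes:
        return set(), set()
    d = max(priority[v] for v in nodes)
    p = d % 2                          # player favoured by priority d
    top = {v for v in nodes if priority[v] == d}
    A = attractor(nodes, owner, succ, p, top)
    W0, W1 = zielonka(nodes - A, owner, priority, succ)
    Wo = W1 if p == 0 else W0          # opponent's winning set below
    if not Wo:
        return (set(nodes), set()) if p == 0 else (set(), set(nodes))
    B = attractor(nodes, owner, succ, 1 - p, Wo)
    W0b, W1b = zielonka(nodes - B, owner, priority, succ)
    return (W0b, W1b | B) if p == 0 else (W0b | B, W1b)
```

For example, in the two-node cycle where node 0 (owned by player 0, priority 0) points to node 1 (owned by player 1, priority 1) and back, the only play sees priority 1 infinitely often, so player 1 wins everywhere.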
  • Clément Tamines. Partial Solvers for Generalized Parity Games

    Abstract:

    Parity games have been broadly studied in recent years for their applications to controller synthesis and verification. In practice, it has been shown that partial solvers for parity games that execute in polynomial time, while incomplete, can solve most games in publicly available benchmark suites. We combine those partial solvers with the classical Zielonka algorithm to obtain an algorithm that is efficient in practice and complete. We also show how to extend partial solvers to generalized parity games which are games with conjunction of parity objectives or disjunction of parity objectives. We also show how to combine those partial solvers with a generalization of the Zielonka algorithm for generalized parity games. We have implemented those algorithms and evaluated them on a large set of benchmarks.

    This is an ongoing joint work with Véronique Bruyère, Guillermo A. Perez and Jean-François Raskin.
  • Tom van Dijk. Distractions in Parity Games

    Abstract:

    Parity games are infinite games between two players on a finite directed colored graph where the winner of each infinite path is determined by the parity of the highest color. They are conceptually simple but expressive enough to capture the power of least and greatest fixed points, as evidenced by the tight linear relationship of parity games to the model checking problem of the modal mu-calculus. Inspired by the, one could say, rather silly (but in practice surprisingly fast) approach of solving a parity game by naively evaluating the corresponding mu-calculus formula, we propose the concept of a distraction. Using this new concept, we now find a fundamental distinction between two classes of parity game algorithms, namely those based on two-player attractor computation, and those based on increasing value along progressive paths. Finally, we provide directions for future research aiming at bridging this fundamental distinction.


September 19th, Thursday: Day 2

Registration

Invited Spotlight Talks: Reachability in Petri Nets and Vector Addition Systems

  • Jerome Leroux. Reachability in Vector Addition Systems is Not Elementary

    Abstract:

    Vector addition systems, also known as Petri nets, are a long established model of concurrency with extensive applications in modelling and analysis of hardware, software and database systems, as well as chemical, biological and business processes. The central algorithmic problem for vector addition systems is reachability: whether from the given initial configuration there exists a sequence of valid execution steps that reaches the given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal STOC 1981 work, and the currently best published upper bound, due to Leroux and Schmitz at LICS this year, is non-primitive-recursive (Ackermannian). We establish a non-elementary lower bound: the reachability problem needs a tower of exponentials of time and space. Until this work, the best lower bound had been exponential space, due to Lipton in 1976. The new lower bound is a major breakthrough for several reasons. Firstly, it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s. Secondly, it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi and other areas, that are known to admit reductions from the vector addition systems reachability problem, are also not elementary.
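To fix intuitions about the model, here is a small illustrative sketch (unrelated to the lower-bound construction of the talk): an explicit-state search for VASS reachability with an artificial cap on counter values. The point of the abstract is precisely that no such naive procedure can settle the problem, since counters are unbounded in general.

```python
from collections import deque

def reachable(transitions, source, target, bound):
    """Breadth-first search over VASS configurations (state, counters).
    A transition (q, v, q') fires from (q, c) when c + v is componentwise
    nonnegative.  Counters are artificially capped at `bound`, so a False
    answer only means 'not reachable within this bound'."""
    start = (source[0], tuple(source[1]))
    goal = (target[0], tuple(target[1]))
    seen = {start}
    queue = deque([start])
    while queue:
        q, c = queue.popleft()
        if (q, c) == goal:
            return True
        for p, v, p2 in transitions:
            if p != q:
                continue
            c2 = tuple(x + d for x, d in zip(c, v))
            if all(0 <= x <= bound for x in c2) and (p2, c2) not in seen:
                seen.add((p2, c2))
                queue.append((p2, c2))
    return False
```

For instance, with transitions that increment a single counter in state q and move to state r while subtracting 2, the configuration (r, 0) is reachable from (q, 0); if the increments have even size and the move subtracts 3, the counter in r is always odd and (r, 0) is not.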
  • Sylvain Schmitz. Reachability in Vector Addition Systems is Primitive-Recursive in Fixed Dimension

    Abstract:

    The reachability problem in vector addition systems is a central question, not only for the static verification of these systems, but also for many inter-reducible decision problems occurring in various fields. The best known upper bound on this problem was not primitive-recursive, even when considering systems of fixed dimension.
    In a joint work with Jérôme Leroux presented at LICS’19, we provide significant refinements to the classical decomposition algorithm of Mayr, Kosaraju, and Lambert and to its termination proof, which yield an ACKERMANN upper bound in the general case, and primitive-recursive upper bounds in fixed dimension. While this does not match the currently best known TOWER lower bound for reachability, it is optimal for a related problem: the downwards language inclusion problem.
    An arXiv preprint is available from arxiv.org/abs/1903.08575

coffee

Session 5: VECTOR ADDITION SYSTEMS

  • Jan Otop. Long-Run Average Behavior of Vector Addition Systems with States

    Abstract:

    A vector addition system with states (VASS) consists of a finite set of states and counters.
    A configuration is a state and a value for each counter;
    a transition changes the state and each counter is incremented, decremented, or left unchanged.
    While qualitative properties such as state and configuration reachability have been studied for VASS,
    we consider the long-run average cost of infinite computations of VASS.
    The cost of a configuration is, for each state, a linear combination of the counter values.
    In the special case of uniform cost functions, the linear combination is the same for all states.
    The (regular) long-run emptiness problem asks, given a VASS, a cost function, and a threshold value,
    whether there is a (lasso-shaped) computation such that the long-run average value of the cost function does not exceed the threshold.
    For uniform cost functions, we show that the regular long-run emptiness problem is
    (a)~decidable in polynomial time for integer-valued VASS, and
    (b)~decidable but nonelementarily hard for natural-valued VASS (i.e., nonnegative counters) and non-zero cost functions.
    For general cost functions, we show that the problem is
    (c)~NP-complete for integer-valued VASS, and
    (d)~undecidable for natural-valued VASS.
    Our most interesting result is for (c) integer-valued VASS with general cost functions,
    where we establish a connection between the regular long-run emptiness problem and quadratic Diophantine inequalities.
    The general (nonregular) long-run emptiness problem is as hard as the regular problem in all cases except (c),
    where it remains open.

    This is joint work with Krishnendu Chatterjee and Thomas A. Henzinger.
  • Wojciech Czerwiński. An Approach to Regular Separability in Vector Addition Systems

    Abstract:

    We study the problem of regular separability of languages of vector addition systems with states (VASS). It asks whether for two given VASS languages K and L, there exists a regular language R that includes K and is disjoint from L. While decidability of the problem in full generality remains an open question, there are several subclasses for which decidability has been shown.

    We propose a general approach to deciding regular separability. We use it to obtain decidability for two subclasses. The first is regular separation of general VASS languages from languages of one-dimensional VASS. The second is regular separation of general VASS languages from integer VASS languages. Together, these two results generalize several of the previous decidability results for subclasses.
  • Michał Pilipczuk. Reachability for BoBrVASS

    Abstract:

    Bounded Vector Addition Systems with States (Bounded VASS) are a variant of the classic VASS model where all values in all configurations are upper bounded by a fixed natural number, encoded in binary in the input. This model gained a lot of attention in 2012 when Haase et al. showed its connections with timed automata. Later in 2013 Fearnley and Jurdziński proved that the reachability problem in this model is PSPACE-complete even in dimension 1.

    In the talk we will discuss the complexity of the reachability problem when the model is further extended with branching transitions; we call this new model BoBrVASS, which stands for Bounded Branching VASS. We will sketch the proof that the problem is EXPTIME-complete when the dimension is 2 or larger, leaving the case of dimension 1 as an interesting open problem.

    The talk will be based on a manuscript co-authored with Filip Mazowiecki (LaBRI, Université de Bordeaux), available as a pre-print on arxiv: https://arxiv.org/abs/1904.10226
  • Radosław Piórkowski. Pumping in 2-VASS – geometry of runs

    Abstract:

    I will present a new pumping technique for 2-dimensional vector addition systems with states (2-VASS) building on natural geometric properties of runs. It is expected to be useful for settling questions concerning languages of 2-VASS, e.g., for establishing the decidability status of the regular separability problem. I will show how one may apply it to reprove an exponential bound on the length of the shortest accepting run, and to prove a new pumping lemma for languages of 2-VASS.

    The presentation is based on a paper by Wojciech Czerwiński, Sławomir Lasota, Christof Löding and Radosław Piórkowski.
  • Dominik Velan. Deciding Fast Termination for Probabilistic VASS with Nondeterminism

    Abstract:

    A probabilistic vector addition system with states (pVASS) is a finite state Markov process augmented with non-negative integer counters that can be incremented or decremented during each state transition, blocking any behaviour that would cause a counter to decrease below zero. pVASS can be used as abstractions of probabilistic programs and enjoy many decidable properties. Using pVASS as abstractions, however, requires the presence of nondeterminism in the model. In this paper, we develop techniques for checking fast termination of pVASS with nondeterminism.
    That is, for every initial configuration of size n, we consider the worst expected number of transitions needed to reach a configuration with some counter negative (the expected termination time). We show that the problem whether the asymptotic expected termination time is linear is decidable in polynomial time for a certain natural class of pVASS with nondeterminism. Furthermore, we show the following dichotomy: if the asymptotic expected termination time is not linear, then it is at least quadratic.

    This is a joint work with Tomáš Brázdil, Krishnendu Chatterjee, Antonín Kučera, and Petr Novotný. It was submitted to ATVA 2019.

lunch break

Session 6: MIXED TOPICS

  • Johannes Doleschal. Split-Correctness in Information Extraction

    Abstract:

    Programs for extracting structured information from text, namely information extractors, often operate separately on document segments obtained from a generic splitting operation such as sentences, paragraphs, k-grams, HTTP requests, and so on. Automated detection of this behavior of extractors, which we refer to as “split-correctness”, would allow text analysis systems to devise query plans with parallel evaluation on segments for accelerating the processing of large documents. Other applications include the incremental evaluation on dynamic content, where re-evaluation of information extractors can be restricted to revised segments, and debugging, where developers of information extractors are informed about potential boundary crossings between different semantic components.
    We propose a new formal framework for split-correctness within the formalism of document spanners. Our preliminary analysis studies the complexity of split-correctness over regular spanners. We also discuss different variants of split-correctness, for instance, in the presence of black-box extractors with “split constraints”.
  • Petr Jancar. Bisimulation Equivalence of First-Order Grammars is ACKERMANN-Complete

    Abstract:

    The proposed presentation is based on the joint paper with Sylvain Schmitz accepted to LICS 2019, with the following abstract.

    Checking whether two pushdown automata with restricted silent actions are weakly bisimilar was shown decidable by Senizergues (1998, 2005). We provide the first known complexity upper bound for this famous problem, in the equivalent setting of first-order grammars. This ACKERMANN upper bound is optimal, and we also show that strong bisimilarity is primitive-recursive when the number of states of the automata is fixed.

    A full version can be found at arxiv.org/abs/1901.07170.
  • Jędrzej Kołodziejski. Bisimulational Categoricity

    Abstract:

    I will present a result which says that for a complete modal theory t (i.e. a maximal consistent set of modal logic formulae), the following conditions are equivalent: (i) t has a unique model up to bisimulation; (ii) every model of t is bisimilar to a finitely branching one; (iii) t has a finitely branching model. I will further discuss examples (such as the EF-logic) and non-examples (e.g. the modal mu-calculus) of logics for which a similar characterisation is possible.
  • Wojtek Jamroga. Some Things are Easier for the Dumb and the Bright Ones (Beware of the Average!)

    Abstract:

    Model checking strategic abilities in multi-agent systems is hard, especially for agents with partial observability of the state of the system. In that case, it ranges from NP-complete to undecidable, depending on the precise syntax and the semantic variant. That, however, is the _worst case complexity_, and the problem might as well be easier when restricted to particular subclasses of inputs.

    In this paper, we study some natural restrictions on models that might lead to cheaper verification. More specifically, we look at models with “extreme” epistemic structure, arising when the agents have almost nil or, symmetrically, almost perfect observability. A sensor observing only one variable, with a fixed number of possible values, provides a natural example of the former type. For the latter class, consider a central controller monitoring a team of robots, with only a fixed number of units being unavailable at a time.

    It turns out that, when we consistently pair those restrictions with the assumptions about agents’ memory (i.e., assume almost perfect observability and perfect recall, or almost nil observability and no recall), model checking can become easier than in general. This applies especially to the verification of abilities of singleton coalitions. We also show that no gain is possible for the other combinations. To prove the latter kind of results, we develop generic techniques that may be useful also outside of this study.

    The paper has been accepted for the 28th International Joint Conference on Artificial Intelligence IJCAI 2019. It reports joint work with Michal Knapik.
  • Vadim Malvone. Reasoning about Natural Strategic Ability

    Abstract:

    In game theory, as well as in the semantics of game logics, a strategy can be represented by any function from states of the game to the agent’s actions. That makes sense from the mathematical point of view, but not necessarily in the context of human behavior. This is because humans are quite bad at executing complex plans, and also rather unlikely to come up with such plans in the first place. In this work, we adopt the view of bounded rationality, and look only at “simple” strategies in specifications of agents’ abilities. We formally define what “simple” means, and propose a variant of alternating-time temporal logic that takes only such strategies into account. We also study the model checking problem for the resulting semantics of ability.

    This is a joint work with Wojtek Jamroga and Aniello Murano.
  • Bastien Maubert. Reasoning about Quality and Fuzziness of Strategic Behaviours

    Abstract:

    Temporal logics are extensively used for the specification of on-going behaviours of reactive systems. Two significant developments in this area are the extension of traditional temporal logics with modalities that enable the specification of on-going strategic behaviours in multi-agent systems, and the transition of temporal logics to a quantitative setting, where different satisfaction values enable the specifier to formalise concepts such as certainty or quality. We introduce and study SL[F] — a quantitative extension of SL (Strategy Logic), one of the most natural and expressive logics describing strategic behaviours. The satisfaction value of an SL[F] formula is a real value in [0,1], reflecting “how much” or “how well” the strategic on-going objectives of the underlying agents are satisfied. We demonstrate the applications of SL[F] in quantitative reasoning about multi-agent systems, by showing how it can express concepts of stability in multi-agent systems, and how it generalises some fuzzy temporal logics. We also provide a model-checking algorithm for our logic, based on a quantitative extension of Quantified CTL.
  • Sławomir Lasota. Definable isomorphism problem

    Abstract:

    We investigate the isomorphism problem in the setting of definable sets (equivalent to sets with atoms): given two definable relational structures, are they related by a definable isomorphism?
    Under mild assumptions on the underlying structure of atoms, we prove decidability of the problem.
    The core result is parameter-elimination: existence of an isomorphism definable with parameters implies existence of an isomorphism definable without parameters.

    The result has been obtained in joint work with Khadijeh Keshvardoost, Bartek Klin, Joanna Ochremiak and Szymon Torunczyk.
  • Oleg Verbitsky. Identifiability of Graphs with Small Color Classes by the Weisfeiler-Leman Algorithm

    Abstract:

    As shown by Immerman and Lander, every vertex-colored graph with color multiplicity 3 is definable in 3-variable first-order logic. We give an efficient decision procedure that, given a graph G with color multiplicity 4, recognizes whether or not G is definable in 3-variable logic with counting quantifiers. A more detailed abstract is uploaded as a PDF file.

coffee

Session 7a: BRACKETS, PETRI NETS, AND TIMED AUTOMATA

  • Dmitry Chistikov. Re-pairing brackets

    Abstract:

    Consider the following one-player game. Take a well-formed sequence of opening and closing brackets. As a move, the player can “pair” any opening bracket with any closing bracket to its right, erasing them. The goal is to re-pair (erase) the entire sequence, and the cost of a strategy is measured by its “width”: the maximum number of nonempty segments of symbols (separated by blank space) seen during the play.

    For various initial sequences, we prove upper and lower bounds on the minimum width sufficient for re-pairing. (In particular, the sequence associated with the complete binary tree of height n admits a strategy of width sub-exponential in log n.) Our two key contributions are (1) lower bounds on the width and (2) their application in automata theory: quasi-polynomial lower bounds on the translation from one-counter automata to Parikh-equivalent nondeterministic finite automata. The latter result answers a question by Atig et al. (LICS 2016).

    Joint work with Mikhail Vyalyi (Higher School of Economics, Moscow, Russia).
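The width measure is easy to compute for a concrete play. As a minimal illustration (my own sketch, not the authors' code), the following Python function replays a given sequence of re-pairing moves and returns the width of the play:

```python
def play_width(word, moves):
    """Replay a re-pairing strategy and return its width.

    word:  a well-formed bracket sequence, e.g. "(())".
    moves: a list of index pairs (i, j) with word[i] == '(' and
           word[j] == ')' and i < j; each index is erased exactly once.
    The width is the maximum number of maximal nonempty segments
    (separated by blank space) seen after any move.
    """
    chars = list(word)
    width = 0
    for i, j in moves:
        assert i < j and chars[i] == '(' and chars[j] == ')'
        chars[i] = chars[j] = ' '          # erase the re-paired brackets
        segments, in_segment = 0, False    # count maximal nonempty segments
        for c in chars:
            if c != ' ' and not in_segment:
                segments += 1
                in_segment = True
            elif c == ' ':
                in_segment = False
        width = max(width, segments)
    return width
```

For example, erasing "(())" outside-in has width 1, while erasing it inside-out has width 2, since removing the inner pair first splits the sequence into two segments.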
  • Olivier Finkel. Wadge Degrees of ω-Languages of Petri Nets

    Abstract:

    We prove that ω-languages of (non-deterministic) Petri nets and ω-languages of (non-deterministic) Turing machines have the same topological complexity: the Borel and Wadge hierarchies of the class of ω-languages of (non-deterministic) Petri nets are equal to the Borel and Wadge hierarchies of the class of ω-languages of (non-deterministic) Turing machines, which also form the class of effective analytic sets. In particular, for each non-null recursive ordinal α < ω_1^CK there exist some Σ^0_α-complete and some Π^0_α-complete ω-languages of Petri nets, and there also exist some ω-languages of Petri nets which are Borel of non-recursive Borel ranks.

    We also prove that there are some Σ^1_1-complete, hence non-Borel, ω-languages of Petri nets, and that it is consistent with ZFC that there exist some ω-languages of Petri nets which are neither Borel nor Σ^1_1-complete.

    Preprint: arXiv:1712.07945

  • Michał Skrzypczak. On the strength of non-determinism for Büchi VASS

    Abstract:

    The motivating question is to understand the strength of non-determinism of Vector Addition Systems with States reading infinite words as input. A special emphasis is put on variants with a limited number of dimensions (i.e., counters) and on unambiguous models. As it turns out, non-determinism is inherent even for one-dimensional models. On the other hand, unambiguous models admit a certain variant of the determinisation construction.
  • Piotr Hofman. Continuous Reachability for Unordered Data Petri Nets is in PTime

    Abstract:

    Unordered data Petri nets (UDPN) are an extension of classical Petri nets with tokens that carry data from an infinite domain and where transitions may check equality and disequality of tokens. UDPN are well-structured, so the coverability and termination problems are decidable, but with higher complexity than for Petri nets. On the other hand, the problem of reachability for UDPN is surprisingly complex, and its decidability status remains open. In the talk, I will present the continuous reachability problem for UDPN, which can be seen as an over-approximation of the reachability problem. The main result is a characterization of continuous reachability for UDPN and a polynomial-time algorithm for solving it. This is a consequence of a combinatorial argument, which shows that if continuous reachability holds then there exists a run using only polynomially many data values.

    The talk is based on joint work with Utkarsh Gupta, Preey Shah, and Akshay S. that was published under the same title at FoSSaCS 2019.
  • Patrick Totzke. Timed Basic Parallel Processes

    Abstract:

    Timed basic parallel processes (TBPP) extend communication-free Petri nets (aka. BPP or commutative context-free grammars) by a global notion of time.

    TBPP can be seen as an extension of timed automata (TA) with context-free branching rules, and as such may be used to model networks of independent timed automata with process creation.

    We show that the coverability and reachability problems (with unary encoded target multiplicities) are PSPACE-complete and EXPTIME-complete, respectively. For the special case of 1-clock TBPP, both are NP-complete and hence not more complex than for untimed BPP. This contrasts with known super-Ackermannian-completeness and undecidability results for general timed Petri nets. As a result of independent interest, and basis for our NP upper bounds, we show that the reachability relation of 1-clock TA can be expressed by a formula of polynomial size in the existential fragment of linear arithmetic, which improves on recent results from the literature.

    The NP upper bound relies on a fun reduction that uses polynomial bounds on the number of event points in value functions of 1-dim. Priced Timed Games.

    This is joint work with Lorenzo Clemente and Piotr Hofman.
  • Sayan Mukherjee. Fast Algorithms for Handling Diagonal Constraints in Timed Automata

    Abstract:

    A popular method for solving reachability in timed automata proceeds by enumerating reachable sets of valuations represented as “zones”. A naive enumeration of zones does not terminate. Various termination mechanisms have been studied over the years, for instance – checking a simulation relation between zones to stop enumeration from certain zones. In the presence of “diagonal constraints”, coming up with efficient simulations is significantly more challenging.

    In previous work at CONCUR 2018, we came up with a simulation relation for automata with diagonal constraints. This simulation check resulted in exploring fewer zones; however, the check was NP-hard, and experiments were indeed not satisfactory in terms of time. There are a couple of other ways to handle diagonal constraints (without using simulations), but they are all prone to an exponential blowup in the number of zones.

    In this talk, we describe a new simulation relation between zones that performs well both in the number of zones explored and in the time taken. The check is still NP-hard, but the algorithm for this check does not seem to hit its worst case very often: experiments with an implementation of this simulation check show significant gains both in terms of the number of zones explored and the time taken. To our knowledge, this is the first method which shows some promise in handling diagonal constraints.

    This presentation is based on a work with the same title accepted at CAV 2019.
    This is joint work with Paul Gastin (LSV, Paris-Saclay) and B Srivathsan (Chennai Mathematical Institute). An extended version of this work can be found at https://arxiv.org/abs/1904.08590 .

Session 7b: MARKOV PROCESSES AND STOCHASTIC HYBRID SYSTEMS

  • David Purser. The big-O Problem for Labelled Markov Chains

    Abstract:

    Given two labelled Markov chains, consider the problem of whether one is big-O of the other. More concretely, the problem asks whether the probability of every finite word in the first chain is smaller than some constant multiple of the probability in the second. We show that the problem is, in general, undecidable. Even when it is known one is big-O of the other, the problem of finding or approximating the constant in the big-O is also undecidable. Our main result shows the big-O problem is coNP-complete for unlabelled Markov chains (i.e. when the alphabet consists of a single character).

    The problem can be restated as a ratio total variation distance, which instead of finding the maximum difference between the probabilities of any two events, finds the maximum ratio between the probabilities of any two events. The problem has application to ε-differential privacy, for which the constant of the big-O notation is exactly exp(ε).

    Joint work with Dmitry Chistikov, Stefan Kiefer and Andrzej Murawski.
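The quantities involved are easy to compute for small chains. In this hedged sketch (my own illustration; the functions and encoding are assumptions, not the authors' code), word probabilities are computed by matrix products, and enumerating short words gives an empirical lower bound on the big-O constant:

```python
import itertools

def word_prob(init, trans, word):
    """Probability that a labelled Markov chain emits `word`.
    init:  initial distribution as a list;
    trans: dict mapping each letter a to a matrix M_a, where M_a[i][j]
           is the probability of moving i -> j while emitting a."""
    v = list(init)
    for a in word:
        m = trans[a]
        v = [sum(v[i] * m[i][j] for i in range(len(v)))
             for j in range(len(m[0]))]
    return sum(v)

def ratio_lower_bound(init1, t1, init2, t2, alphabet, max_len):
    """Empirical lower bound on the big-O constant: the maximum of
    P1(w) / P2(w) over all nonempty words w of length <= max_len.
    Only a semi-test -- the big-O problem itself is undecidable."""
    best = 0.0
    for n in range(1, max_len + 1):
        for w in itertools.product(alphabet, repeat=n):
            p1, p2 = word_prob(init1, t1, w), word_prob(init2, t2, w)
            if p1 > 0.0:
                if p2 == 0.0:
                    return float('inf')   # witness: P1 is not big-O of P2
                best = max(best, p1 / p2)
    return best
```

A true big-O witness would need a bound holding for all words, which is exactly what the undecidability result rules out in general.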
  • Jakob Piribauer. Long-run probabilities of path properties

    Abstract:

    We introduce the notion of long-run probabilities of path properties in probabilistic systems. This quantitative measure indicates the degree to which a path property holds on the suffixes of a path. In verification, this measure could, e.g., be used to determine the expected proportion of time in which a system operates under normal conditions.

    The problem we address in this talk is the maximisation of the long-run probability of constrained reachability properties “a Until b” in finite-state Markov decision processes. For many problems in model-checking, the treatment of constrained reachability properties can be easily reduced to the treatment of reachability properties. In our maximisation problem, however, constrained reachability properties are significantly harder. While the maximal long-run probability of a reachability property can be computed in polynomial time, the threshold problem for the maximal long-run probability of constrained reachability properties is NP-hard.

    In the talk, an exponential time algorithm for the computation of maximal long-run probabilities of the constrained reachability property “a until b” will be presented. This algorithm follows the following two key steps: First, a labelled Markov decision process M can be transformed into an infinite-state weighted Markov decision process M’ such that the maximal long-run probability of “a until b” in M equals the maximal mean-payoff in M’. Second, there is a computable, at most exponentially large natural number K, called saturation point, such that some optimal scheduler behaves memorylessly whenever K consecutive a-states have been visited. This can be used to further transform M’ into a finite-state Markov decision process M” with the same maximal mean payoff.

    The question whether these results can be extended to regular co-safety properties specified by non-deterministic finite automata remains open.

    This talk is based on joint work with Christel Baier, Nathalie Bertrand, and Ocan Sankur accepted for publication at LICS’19.
  • Pierre Vandenhove. Reachability in Stochastic Hybrid Systems

    Abstract:

    Hybrid systems are dynamical systems combining continuous and discrete transitions. Continuous transitions represent the continuous evolution of various variables over time (e.g., according to a differential equation) and discrete transitions represent a change of mode of the system, where variables can be reset. Hybrid systems are widely used to model time-dependent reactive systems. In order to model uncertainty in a more subtle way, it can be valuable to make hybrid systems stochastic. We consider a class of stochastic hybrid systems (SHSs, for short) redefining hybrid systems by replacing non-determinism with randomness in both continuous and discrete transitions. We focus on reachability problems, which involve, e.g., deciding whether a specified set of states is reached almost surely (qualitative reachability), or approximating the probability of reaching a set (quantitative reachability). As reachability problems for hybrid systems are very prone to undecidability (already for rather restricted classes in the non-deterministic case), in order to obtain a decidable subclass, we need to balance restrictions to the discrete behavior and to the continuous behavior. Our approach is very restrictive toward the discrete transitions: we assume that the discrete transitions of our systems are strongly reset, i.e., that the value of the continuous variables cannot depend on their previous values before taking an edge, thereby decoupling the continuous behavior from the discrete behavior. A strong reset assumption has already been considered in [4, 3] for non-deterministic hybrid systems.

    To apprehend the stochasticity of the model, we follow the line of Bertrand et al. [2] on stochastic transition systems (STSs), which encompass stochastic real-time systems. They make use of the concept of decisiveness (from [1]), which describes a class of stochastic systems sharing many desirable properties with finite systems, and introduce a notion of abstraction for stochastic systems, very close to the classical bisimulation concept for transition systems. We apply this machinery to the specific case of SHSs. To do so, we first establish the decisiveness of the class of SHSs with strong resets, as well as the existence of a finite abstraction. We then show that for the class of stochastic hybrid systems with dynamics defined in an o-minimal structure whose theory is decidable (such as the ordered field of real numbers), along with a few classical hypotheses about the distributions, this abstraction is computable and qualitative reachability problems become decidable.

    In this talk, I will explain the main challenges facing reachability problems in SHSs and justify why the assumptions that we formulate to obtain decidability are natural and sufficient to overcome these challenges. This presentation is based on ongoing joint work with Patricia Bouyer, Thomas Brihaye, Mickael Randour, and Cédric Rivière.

    [1] Parosh A. Abdulla, Noomene Ben Henda, and Richard Mayr. Decisive Markov chains. Logical Methods in Computer Science, 3(4), 2007.
    [2] Nathalie Bertrand, Patricia Bouyer, Thomas Brihaye, and Pierre Carlier. When are stochastic transition systems tameable? J. Log. Algebr. Meth. Program., 99:41–96, 2018.
    [3] Thomas Brihaye, Christian Michaux, Cédric Rivière, and Christophe Troestler. On o-minimal hybrid systems. In HSCC’04: Hybrid Systems: Computation and Control, volume 2993 of Lecture Notes in Computer Science, pages 219–233. Springer, 2004.
    [4] Gerardo Lafferriere, George J. Pappas, and Shankar Sastry. O-minimal hybrid systems. Mathematics of Control, Signals, and Systems, 13(1):1–21, 2000.
  • Pranav Ashok. PAC Statistical Model Checking for Markov Decision Processes and Stochastic Games

    Abstract:

    Statistical model checking (SMC) is a technique for analysis of probabilistic systems that may be (partially) unknown. We present an SMC algorithm for (unbounded) reachability yielding probably approximately correct (PAC) guarantees on the results. We consider both the setting (i) with no knowledge of the transition function (the only quantity required being a bound on the minimum transition probability) and (ii) with knowledge of the topology of the underlying graph. On the one hand, it is the first algorithm for stochastic games. On the other hand, it is the first practical algorithm even for Markov decision processes. Compared to previous approaches, where PAC guarantees require running times longer than the age of the universe even for systems with a handful of states, our algorithm often yields reasonably precise results within minutes.
  • Georg Zetzsche. Presburger arithmetic with stars, rational subsets of graph groups, and nested zero tests

    Abstract:

    We study the computational complexity of existential Presburger arithmetic with (possibly nested occurrences of) a Kleene-star operator. In addition to being a natural extension of Presburger arithmetic, our investigation is motivated by two other decision problems.

    The first problem is the rational subset membership problem in graph groups. A graph group is an infinite group specified by a finite undirected graph. While a characterisation of graph groups with a decidable rational subset membership problem was given by Lohrey and Steinberg [J. Algebra, 320(2) (2008)], it has been an open problem (i) whether the decidable fragment has elementary complexity and (ii) what is the complexity for each fixed graph group. The second problem is the reachability problem for integer vector addition systems with states and nested zero tests.

    We prove that the satisfiability problem for existential Presburger arithmetic with stars is NEXP-complete and that all three problems are polynomially inter-reducible. Moreover, we consider for each problem a variant with a fixed parameter: We fix the star-height in the logic, a graph parameter for the membership problem, and the number of distinct zero-tests in the integer vector addition systems. We establish NP-completeness of all problems with fixed parameters.

    In particular, this enables us to obtain a complete description of the complexity landscape of the rational subset membership problem for fixed graph groups: If the graph is a clique, the problem is NL-complete. If the graph is a disjoint union of cliques, it is P-complete. If it is a transitive forest (and not a union of cliques), the problem is NP-complete. Otherwise, the problem is undecidable.

Break

Session 8: AUTOMATA ON WORDS

  • Corto Mascle. On Nonnegative Integer Matrices and Short Killing Words

    Abstract:

    Unambiguous automata are a popular compromise between non-deterministic and deterministic automata in terms of succinctness and complexity. A word is called a killing word for an automaton if it is not a factor of any word recognized by this automaton. Here we discuss the problem of computing a short killing word for an unambiguous automaton, as well as possible extensions. This problem has applications in the theory of codes as well as in some restrictions of the matrix mortality problem. This is joint work with Stefan Kiefer, published at STACS 2019.
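Whether a fixed word kills a given automaton reduces to a simple reachability check: w is killing iff it labels no path from a reachable state to a co-reachable state. Here is a small Python sketch of this check (my own illustration; the NFA encoding is an assumption):

```python
def is_killing(word, delta, init, final):
    """True iff `word` is a factor of no word accepted by the NFA.
    delta: dict mapping (state, letter) to a set of successor states."""
    # States reachable from an initial state.
    reach, frontier = set(init), list(init)
    while frontier:
        p = frontier.pop()
        for (q, _a), succs in delta.items():
            if q == p:
                for s in succs - reach:
                    reach.add(s)
                    frontier.append(s)
    # States from which a final state is reachable.
    coreach, changed = set(final), True
    while changed:
        changed = False
        for (q, _a), succs in delta.items():
            if q not in coreach and succs & coreach:
                coreach.add(q)
                changed = True
    # Does `word` label a path from a reachable to a co-reachable state?
    current = set(reach)
    for a in word:
        current = {s for p in current for s in delta.get((p, a), set())}
    return not (current & coreach)
```

The hard part in the paper is of course not this check but bounding the length of, and finding, a short killing word.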
  • Rafał Stefański. Single use register automata for data words

    Abstract:

    We introduce a new automaton model for data words, called single use register automata. These are like register automata for data words (introduced by Kaminski and Francez), with the restriction that every read access to a register destroys the register contents. We prove that the following models are equivalent: (a) one-way deterministic single use register automata; (b) two-way deterministic single use register automata; (c) languages recognised by orbit-finite semigroups.
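To give a feel for the single-use discipline, here is a hedged hand-simulation (my own example, not from the paper) of a one-way deterministic single-use register automaton for the data language "all data values are equal": every comparison destroys the register content, so the automaton must re-store the current datum after each read:

```python
def all_equal_single_use(word):
    """Simulate a 1-way deterministic single-use register automaton for
    the data language 'all data values are equal'. A read destroys the
    register content, so the current datum is re-stored after each test."""
    register = None
    for i, d in enumerate(word):
        if i == 0:
            register = d                      # initial write
        else:
            r, register = register, None      # destructive read
            if r != d:
                return False                  # reject on mismatch
            register = d                      # re-store the datum
    return True
```

The re-store step is what makes the single-use restriction workable here; without it the register would be empty after the first comparison.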
  • Nicolas Mazzocchi. Two-way automata with Presburger acceptance

    Abstract:

    This year, I will speak about two-way Parikh automata. This class of automata is an extension of two-way finite automata where transitions are labelled by a vector of integers and the acceptance condition consists in checking whether the component-wise addition of the vectors along the run satisfies a given Presburger formula. I will investigate the emptiness problem, introducing conditions for decidability and tight complexity results depending on the acceptance constraint (existential, universal, or fixed-alternation Presburger formula). Please see the attached long abstract for further information.
  • Ismaël Jecker. On Equivalence Problem for Rational Relations

    Abstract:

    It is well known that the containment and equivalence problems for rational relations are undecidable. In this talk, we study the decidability status of these problems for relations within certain subclasses of rational relations.

    In the 1960s, Nivat showed that binary rational relations over a finite alphabet A correspond to 2-tape finite state automata that recognize regular languages L over the product alphabet {1,2}×A, where (i,a) is interpreted as reading letter a from tape i. Accordingly, a word w in L denotes the pair (u1, u2) in (A^*)^2 in which ui is the projection of w onto i-labelled letters. Enforcing restrictions on the reading regime from the tapes, which we call synchronization, yields various subclasses of relations. Such synchronization restrictions are imposed through regular properties on the projection of the language L onto {1,2}. In this way, for each regular language C over the alphabet {1,2}^*, one obtains a class REL(C) of relations. Synchronous, recognizable, and length-preserving rational relations are all examples of classes that can be defined in this way.

    We characterize, through a decidable property, the classes of synchronized relations inside which the containment and equivalence problems are decidable. Interestingly, these classes are proved to be exactly those which are closed under intersection.
  • Chris Köcher. Reachability Problems on Partially Lossy Queue Automata

    Abstract:

    We study the reachability problem for queue automata and lossy queue automata. Queue automata are known to be Turing-complete, implying the undecidability of the reachability problem. In contrast to this result, reachability is decidable (but not primitive recursive) if the queue automata are allowed to forget any parts of their contents.

    A trivial approach to approximating the reachability problem is the step-by-step simulation of the queue automaton’s computations. Boigelot et al. and Abdulla et al. improved this approximation by considering so-called “meta-transitions”, which are special sequences of transformations. In particular, the iteration of some sequence of transformations is such a meta-transition. It was proven that, starting from a regular language of queue contents, the set of reachable queue contents after the application of any meta-transition is again effectively regular.

    Here, we want to improve these results by considering two further meta-transformations: on the one hand, we consider the iteration of certain languages of transformations, which in particular generalizes the results of Boigelot et al. and Abdulla et al. On the other hand, we consider languages of transformations which are closed under certain context-sensitive commutations. We will see that the application of both meta-transformations preserves regularity. Both constructions are possible in polynomial time. Hence, this is a huge benefit for verification, in contrast to the undecidability and inefficiency, respectively, of the original reachability problem for reliable and lossy queue automata.
  • Corentin Barloy. A robust class of linear recurrence sequences

    Abstract:

    This is joint work with Nathanaël Fijalkow, Nathan Lhote and Filip Mazowiecki.

    Linear recurrence sequences (LRS) are defined very simply, but many problems (such as the Skolem problem) are not known to be decidable in the full generality of this setting. LRS are characterised by different models: weighted automata (WA) over a unary alphabet, linear cost-register automata, rational expressions, and sequences whose formal power series are rational (a quotient of two polynomials).

    We present a new subclass of LRS, the class of poly-rational sequences, with the hope that some problems will be tractable in this subclass. Our subclass is characterised by submodels of each of these models. We restrict WA to polynomially-ambiguous automata, that is, automata in which the function mapping each word to its number of accepting paths is bounded by a polynomial. Cost-register automata are automata with write-only registers; we consider here copyless cost-register automata, which use each register at most once during an update. Finally, we say that a sequence is computed by a poly-rational expression if it is in the closure of arithmetic and geometric sequences under certain operations: component-wise addition and multiplication, and the so-called shift and shuffle. Our main theorem states that all these subclasses coincide, along with the class of sequences whose formal power series are rational with roots of rational numbers as poles, and the class of sequences whose eigenvalues are roots of rational numbers.

    This gives us strictness of the ambiguity hierarchy for weighted automata over a unary alphabet. We note that this result differs from the one for tropical automata.
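The closure operations on sequences are easy to write down concretely. In this hedged sketch (my own illustration; in particular, reading "shuffle" as interleaving is my interpretation of the abstract), sequences are represented as functions from indices to values:

```python
def arith(a, d):
    """Arithmetic sequence a, a+d, a+2d, ..."""
    return lambda n: a + d * n

def geom(a, r):
    """Geometric sequence a, a*r, a*r^2, ..."""
    return lambda n: a * r ** n

def add(u, v):
    """Component-wise addition."""
    return lambda n: u(n) + v(n)

def mul(u, v):
    """Component-wise (Hadamard) multiplication."""
    return lambda n: u(n) * v(n)

def shift(u):
    """Drop the first term: n -> u(n + 1)."""
    return lambda n: u(n + 1)

def shuffle(u, v):
    """Interleave u and v: u0, v0, u1, v1, ..."""
    return lambda n: u(n // 2) if n % 2 == 0 else v(n // 2)
```

For instance, shuffling the geometric sequence 1, 2, 4, ... with the arithmetic sequence 0, 1, 2, ... yields 1, 0, 2, 1, 4, 2, ...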

Informal Picnic

20

September 20th, Friday: Day 3

Invited Talk 2

  • Marie Fortin: Expressivity of first-order logic and star-free propositional dynamic logic over ordered structures

    Abstract:

    Star-free propositional dynamic logic is a variant of propositional dynamic logic (PDL) which is expressively equivalent to the 3-variable fragment of first-order logic. It is closely related to Tarski’s calculus of relations, and captures various temporal logics. In this talk, I will present sufficient conditions for star-free PDL and first-order logic to have the same expressive power over a class of linearly ordered structures with unary and binary relations; namely, that all binary relations are “interval-preserving”. This generalizes several results from the literature, in particular, the fact that linear orders as well as real-time signals have the 3-variable property. Another application is the synthesis of communicating finite-state machines from first-order specifications, using star-free PDL as an intermediate language.

coffee

Session 9: REGULARITY

  • Dominik Wojtczak. Omega-Regular Objectives in Model-Free Reinforcement Learning

    Abstract:

    We provide the first correct solution for model-free reinforcement learning of ω-regular objectives for Markov decision processes (MDPs). We present a constructive reduction from the almost-sure satisfaction of ω-regular objectives to an almost-sure reachability problem, and extend this technique to learning how to control an unknown model so that the chance of satisfying the objective is maximized. We compile ω-regular properties into limit-deterministic Büchi automata instead of the traditional Rabin automata; this choice sidesteps difficulties that have marred previous proposals. Our approach allows us to apply model-free, off-the-shelf reinforcement learning algorithms to compute optimal strategies from the observations of the MDP. We present an experimental evaluation of our technique on benchmark learning problems.

    This is joint work with Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi and Ashutosh Trivedi and was published at TACAS 2019.
  • Simon Jantsch. From Linear Temporal Logic to Unambiguous Büchi Automata

    Abstract:

    This presentation considers a translation from Linear Temporal Logic (LTL) to unambiguous Büchi automata (UBA).
    Unambiguity, which requires that every word has at most one accepting run, has received attention in many different areas of automata theory.
    Still, fundamental questions remain open, for example the exact complexity of deciding universality for UBA.
    In some cases it is known, however, that unambiguous automata yield better algorithms from a complexity point of view: for example, computing the probability induced by a language given as a UBA in a labeled Markov chain is possible in polynomial time, while the corresponding decision problem is PSPACE-complete for NBA.
    This motivates the search for good translations from LTL to UBA.

    Our translation is based on a known translation from LTL to NBA that uses very weak alternating automata (VWAA) as an intermediate representation.
    We introduce a new notion of unambiguity for alternating automata and show that the standard VWAA to NBA construction produces a UBA if the input VWAA is unambiguous.
    Checking whether a VWAA is unambiguous in this sense is PSPACE-complete.

    The main part of the construction is a disambiguation algorithm for VWAA.
    In order to produce small UBA, our disambiguation step iteratively finds “ambiguous” states, which are then adapted using local transformations.
    These local transformations use alternation and rely on the fact that VWAA can be efficiently complemented.
    In the end, the unambiguous VWAA is translated to a UBA by the standard translation.
    We also discuss an implementation of our procedure that includes several heuristics aimed at producing small UBA.

    This presentation is based on joint work with Christel Baier, Joachim Klein and David Müller.
  • Grzegorz Fabiański. Uniformization of MSO-definable relations on integers

    Abstract:

    We consider the following decision problem: given an MSO-definable relation R on bi-infinite words (words indexed by the integers), does there exist an MSO-definable function F (a functional relation between bi-infinite words) which uniformizes it, meaning that whenever x R y for some y, we also have x R F(x)?
    We argue that this problem is decidable.
  • Vincent Michielini. Uniformization problem for classes of regular languages: an algebraic approach

    Abstract:

    Given a binary relation R between finite words over an alphabet A and
    finite words over an alphabet B (which we treat as a language over the
    product alphabet A × B), a uniformisation of R is a language L ⊆ R
    that chooses a single word over B for each word in the projection of
    R. It is known that MSO, i.e. the full class of regular languages, is
    strong enough to define a uniformisation for each of its relations.

    The uniformisation question is thus a quest for other formalisms,
    weaker than MSO, which also have this property. It is strongly linked
    with the axiom of choice: under which conditions can we provide an
    “acceptable” choice, where “acceptable” means “expressible in some
    language”?

    We solve this problem for pseudo-varieties of semigroups (pvs): we
    show that no nonempty pvs weaker than MSO can provide uniformisations
    for its relations. We also extend the uniformisation problem to words
    over non-finite domains: given an enumerable set, under which
    conditions can we define a single element of it with an MSO sentence?
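For readers unfamiliar with the notion, the uniformisation property described above can be stated as follows (notation chosen here for illustration, not taken from the talk; π_A denotes projection onto the A component):

```latex
% L uniformises R \subseteq (A \times B)^* when it selects exactly one
% output word for every input word that R relates to something:
L \subseteq R, \qquad \pi_A(L) = \pi_A(R), \qquad
\big((u,v) \in L \wedge (u,v') \in L\big) \Rightarrow v = v'
```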
  • Denis Kuperberg. Computational content of cyclic proofs for regular expressions

    Abstract:

    We investigate the computational content of a cyclic proof system for Kleene algebra. If e, f are regular expressions such that L(e) is contained in L(f), the sequent e -> f can be proved in a formal proof system. Different proofs of the same sequent can be interpreted as different programs mapping each word of e to a witness that it is in f. Therefore, the sequent e -> f can also be interpreted as the type of such a program. We show that, depending on whether the cut rule and some structural rules are allowed, the expressive power of this proof system matches different complexity classes, from regular languages to System T.

    This is joint work with Laureline Pinault and Damien Pous.
  • Émile Hazard. An infinitary proof system for the inclusion of omega-regular languages

    Abstract:

    Proof trees are usually required to be well-founded. However, allowing infinite proof trees can be a fruitful generalization, for instance in logics with fixed points. Moreover, regular non-well-founded proofs (proofs having a finite number of distinct subtrees) are still finite objects that can be processed by algorithms. Non-well-founded proof systems for Kleene algebra (and variants) were studied by Das and Pous [1,2], who introduced such a system and showed its soundness and regular completeness.
    In the present work, we push this approach further by designing a non well-founded proof system for omega-regular expressions. We prove the system to be sound and its cut-free regular fragment to be complete, with respect to inclusion of omega-regular languages.
    Contrary to the case of finite words, we need here the formalism of hypersequents to obtain any form of completeness.
    This proof system is amenable to proof search thanks to cut-free regular completeness, and the regular proofs obtained can be used as certificates in a model-checking paradigm, with the advantage of being compositional.

    This is joint work with Denis Kuperberg.

    [1] Anupam Das and Damien Pous. A cut-free cyclic proof system for Kleene algebra. In Proc.
    TABLEAUX, volume 10501 of Lecture Notes in Computer Science, pages 261–277. Springer Verlag, 2017.

    [2] Anupam Das and Damien Pous. Non-wellfounded proof theory for (Kleene+action)(algebras+lattices). In 27th EACSL Annual Conference on Computer Science Logic, CSL 2018, September 4-7, 2018, Birmingham, UK, pages 19:1–19:18, 2018.
  • Nathan Lhote. Computability and continuity of regular functions over ω-words

    Abstract:

    The class of regular functions from infinite words to infinite words is characterized by MSO-transducers, streaming ω-string transducers (transducers with registers) as well as deterministic two-way transducers with regular look-ahead. In their one-way restriction, the latter transducers define the class of rational functions.

    Computability of functions over ω-words is defined through deterministic type-2 Turing machines, which have an (infinite) input tape, several working tapes, and a one-way output tape whose head only moves right. The output of such a machine over an input ω-word is the ω-word eventually produced on the output tape along the run.

    While it is easy to see that computability entails continuity, the converse does not hold in general. We show however that in the case of functions which effectively preserve regular languages by inverse image, continuity and computability are equivalent. We give a generic characterization of continuity for functions preserving regular languages by inverse image. Using this we are able to decide computability of rational functions in NLogSpace (it was already known to be in PTime by Prieur) and regular functions in PSpace.

    This is joint work with Vrunda Dave, Shankara Narayanan Krishna and Emmanuel Filiot.
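A standard example (not taken from the talk) showing that continuity can fail for regular functions over ω-words:

```latex
f(w) \;=\;
\begin{cases}
  0^\omega & \text{if } w \text{ contains infinitely many } a\text{'s},\\
  1^\omega & \text{otherwise.}
\end{cases}
```

No finite prefix of w determines even the first letter of f(w), so f is not continuous, and a fortiori not computable, although it is regular.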
  • Christopher Hugenroth. Separating Languages over Infinite Words with Product Automata

    Abstract:

    We study the separation problem for regular languages over infinite words with respect to the Wagner hierarchy. The membership problem is known to be efficiently decidable in this setting. The decision procedure for membership analyses the loop structure of a given automaton. We show that it suffices to analyse the loop structure of the product automaton of two automata in order to decide separability of languages defined by deterministic automata.

    This leads to an algorithm that determines for two languages the minimal Wagner class that contains a separator and this algorithm runs in polynomial time if the languages are given as deterministic Muller or parity automata. Further, a minimal separator can be constructed in exponential time.
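The central object of the procedure, the product of two deterministic automata whose loop structure is analysed, can be sketched as follows (transition structure only; a hypothetical minimal Python illustration, not the authors' implementation):

```python
def product_automaton(delta1, delta2, init1, init2, alphabet):
    """Reachable part of the product of two deterministic automata.

    delta1, delta2: dicts mapping (state, letter) -> state.
    Returns the product transition function over reachable state pairs.
    """
    delta = {}
    frontier = [(init1, init2)]
    seen = {(init1, init2)}
    while frontier:
        p, q = frontier.pop()
        for a in alphabet:
            p2, q2 = delta1[(p, a)], delta2[(q, a)]
            delta[((p, q), a)] = (p2, q2)   # both components move in lockstep
            if (p2, q2) not in seen:
                seen.add((p2, q2))
                frontier.append((p2, q2))
    return delta

# Two tiny two-state automata over {a, b}.
d1 = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
d2 = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 0}
prod = product_automaton(d1, d2, 0, 0, 'ab')
print(sorted({s for (s, _) in prod}))
```

The separation procedure then inspects the loops of this product rather than the loops of the two automata separately.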

Break

Session 10: MEAN-PAYOFF GAMES, REACHABILITY GAMES, AND PATROLLING GAMES

  • Véronique Bruyère. Energy mean-payoff games

    Abstract:

    In this paper, we study one-player and two-player energy mean-payoff games. Energy mean-payoff games are games of infinite duration played on a finite graph with edges labeled by 2-dimensional weight vectors. The objective of the first player (the protagonist) is to satisfy an energy objective on the first dimension and a mean-payoff objective on the second dimension. We show that optimal strategies for the first player may require infinite memory while optimal strategies for the second player (the antagonist) do not require memory. In the one-player case (where only the first player has choices), the problem of deciding who is the winner can be solved in polynomial time while for the two-player case we show co-NP membership and we give effective constructions for the infinite-memory optimal strategies of the protagonist.

    This is joint work with Quentin Hautem, Mickael Randour, and Jean-François Raskin.
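In symbols, writing w_1(i), w_2(i) for the two weight dimensions along a play, c_0 for an initial credit and ν for a threshold (notation chosen here for illustration), the protagonist's conjunctive objective reads:

```latex
% energy objective on dimension 1:
\forall n \ge 1:\quad c_0 + \sum_{i=1}^{n} w_1(i) \;\ge\; 0
% mean-payoff objective on dimension 2:
\liminf_{n\to\infty} \; \frac{1}{n}\sum_{i=1}^{n} w_2(i) \;\ge\; \nu
```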
  • Daniele Dell’Erba. Quasi Dominion Measures for Mean-Payoff Games

    Abstract:

    We propose a novel algorithm for the solution of mean-payoff games, which merges two seemingly unrelated concepts introduced in the context of parity games: small progress measures and quasi dominions. We show that the integration of the two notions can be highly beneficial and can significantly speed up convergence to the solution. Experiments show that the resulting algorithm performs orders of magnitude better than the best solution algorithm currently known.
  • Pierre Ohlmann. Universal Graphs for Mean Payoff Games

    Abstract:

    We study the computational complexity of solving mean payoff games. This class of games can be seen as an extension of parity games, and they have similar complexity status: in both cases solving them is in NP∩coNP and not known to be in P. In a breakthrough result Calude, Jain, Khoussainov, Li, and Stephan constructed in 2017 a quasipolynomial time algorithm for solving parity games, which was quickly followed by two other algorithms with the same complexity. Our objective is to investigate how these techniques can be extended to the study of mean payoff games.
    The starting point is the notion of separating automata, which has been used to present all three quasipolynomial time algorithms for parity games and gives the best complexity to date. The notion naturally extends to mean payoff games and yields a class of algorithms for solving mean payoff games. The contribution of this paper is to prove tight bounds on the complexity of algorithms in this class. We construct two new algorithms for solving mean payoff games. Our first algorithm depends on the largest weight N (in absolute value) appearing in the graph and runs in sublinear time in N, improving over the previously known linear dependence in N. Our second algorithm runs in polynomial time for a fixed number k of weights.
    We complement our upper bounds by providing in both cases almost matching lower bounds, showing the limitations of the separating automata approach. We show that we cannot hope to improve on the dependence in N nor break the linear dependence in the exponent in the number k of weights. In particular, this shows that separating automata do not yield a quasipolynomial algorithm for solving mean payoff games.

    This is joint work with Nathanael Fijalkow and Pawel Gawrychowski.
  • Aline Goeminne. On Relevant Equilibria in Reachability Games

    Abstract:

    We study multiplayer reachability games played on a finite directed graph equipped with target sets, one for each player. We are interested in both the qualitative setting where each player wants to reach his target set and the quantitative setting where each player wants to reach it as quickly as possible. In those qualitative/quantitative reachability games, it is known that there always exists a Nash equilibrium (NE) and a subgame perfect equilibrium (SPE). But sometimes several equilibria may coexist such that in one equilibrium no player reaches his target set whereas in another one several players reach it. It is thus very natural to identify “relevant” equilibria. We consider several notions of interesting equilibria including Pareto optimal equilibria and equilibria with high social welfare. Our contribution is twofold. First, we identify subclasses of qualitative/quantitative reachability games where an interesting equilibrium always exists. Secondly, we provide complexity results for various related decision problems.

    It is joint work with Thomas Brihaye, Véronique Bruyère and Nathan Thomasset.

  • Marie Van Den Bogaard. The Complexity of Subgame Perfect Equilibria in Quantitative Reachability Games

    Abstract:

    In this talk, we present the main ideas behind an algorithm to decide in polynomial space the constrained existence problem for Subgame Perfect Equilibria in quantitative reachability games.
  • David Klaška, Antonín Kučera, Tomáš Lamser and Vojtěch Řehák. Automatic Synthesis of Efficient Regular Strategies in Adversarial Patrolling Games

    Abstract:

    We give a polynomial-time algorithm for synthesizing efficient regular strategies in adversarial patrolling games with general topology. Regular strategies use finite memory to gather relevant information about the history of the Defender’s moves, which results in substantially better protection of the targets. So far, the scope of automatic strategy synthesis was limited to positional strategies (which ignore the history) or to regular strategies where the underlying finite-memory observer had to be supplied manually. Furthermore, the existing methods do not give any information on how far the constructed strategies are from being optimal. In this paper, we try to overcome these limitations. We develop a novel gradient-based algorithm for synthesizing regular strategies where the underlying finite-memory observers are constructed algorithmically. The running time of our algorithm is polynomial, which makes the algorithm applicable to instances of realistic size. Furthermore, we develop an algorithm for computing an upper bound on the best achievable protection, and compare the quality of the constructed strategies against this bound. Thus, we can effectively measure the “distance” of the constructed strategies from optimal strategies, and our experiments show that this distance is often quite small.

Closing Remarks
