#### Invited Talks and Tutorials

#### Invited Talks

#### Accepted Presentations

Polyregular functions are the class of string-to-string functions definable by pebble transducers, an extension of finite-state automata with outputs and multiple two-way reading heads (pebbles) with a stack discipline. If a polyregular function can be computed with k pebbles, then its output length is bounded by a polynomial of degree k in the input length. But Bojańczyk has shown that the converse fails. We provide two alternative easier proofs. The first establishes by elementary means that some quadratic polyregular function requires 3 pebbles. The second proof - just as short, albeit less elementary - shows a stronger statement: for every k, there exists some polyregular function with quadratic growth whose output language differs from that of any k-fold composition of macro tree transducers (and which therefore cannot be computed by a k-pebble transducer). Along the way, we also refute a conjectured logical characterization of polyblind functions.
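The growth bound at play can be made concrete with the textbook squaring function, which repeats the input once per input letter: it has output length exactly |w|^2 and is computable with 2 pebbles. A minimal illustrative sketch (not from the abstract, purely for intuition):

```python
def squaring(w: str) -> str:
    """Toy polyregular function with quadratic growth: output w once per
    letter of w, as a 2-pebble transducer would (the outer pebble marks the
    current letter while the inner head rescans the whole input)."""
    return "".join(w for _ in w)  # |output| = |w|^2
```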

In the past few years, Cécilia Pradic and I have been pursuing a research programme aimed at uncovering connections of the form: "the languages/functions recognized by some flavor A of automata are exactly those definable in some typed λ-calculus P, using a certain input/output convention". λ-calculi are minimalistic functional programming languages, and type systems have been used to achieve interesting restrictions on their expressive power in a field called implicit computational complexity – hence the title. The I/O conventions that we use are based on Church encodings, whose connections with automata theory have been leveraged in higher-order model checking (HOMC). Our main inspiration is a result of Hillebrand and Kanellakis (1996): A = regular languages, P = simply typed λ-calculus. We have managed to characterize star-free languages, regular tree functions (in a similar fashion to Gallot, Lemay & Salvati 2020) and comparison-free polyregular functions (a class whose introduction was motivated by our work on λ-calculi!) using type systems based on linear logic (a programming language counterpart of various "copyless" or "single use" restrictions in automata theory). These developments, and the semantic/category-theoretic tools that they involve, are recapitulated in the longer abstract at https://cs-web.swan.ac.uk/~cpradic/smp-abstract.pdf. More recently, I have devised a transducer model, based on the collapsible pushdown automata used in HOMC, that captures the tree-to-tree functions computable by simply typed λ-terms, thus answering a question going back at least to the 1980s. This work can also be seen as a revival of Engelfriet and Vogler's investigations into "high level tree transducers", also in the 1980s.

We introduce one deterministic-counter automata (ODCA), which are one-counter automata where all runs labelled by a given word have the same counter effect, a property we call counter-determinacy. ODCAs are an extension of visibly one-counter automata - one-counter automata (OCA) where the input alphabet determines the actions on the counter. They are a natural way to introduce non-determinism/weights to OCAs while maintaining the decidability of crucial problems that are undecidable on general OCAs. For example, the equivalence problem is decidable for deterministic OCAs, whereas it is undecidable for non-deterministic OCAs. We consider both non-deterministic and weighted ODCAs. Our work shows that the equivalence problem is decidable in polynomial time for weighted ODCAs over a field and in polynomial space for non-deterministic ODCAs. As a corollary, we get that the regularity problem, i.e., the problem of checking whether an input weighted ODCA is equivalent to some weighted automaton, is also in polynomial time. Furthermore, we show that the covering and coverable equivalence problems for uninitialised weighted ODCAs are decidable in polynomial time. We also introduce a few reachability problems that are of independent interest and show that they are in P. These reachability problems later help in solving the equivalence problem. https://arxiv.org/abs/2301.13456

We introduce the 2-sorted counting logic GC^k that expresses properties of hypergraphs. This logic has available k "blue" variables to address edges, an unbounded number of "red" variables to address vertices of a hypergraph, and atomic formulas E(e,v) to express that vertex v is contained in edge e. We show that two hypergraphs H, H' satisfy the same sentences of the logic GC^k if, and only if, they are homomorphism indistinguishable over the class of hypergraphs of generalised hypertree width at most k. Here, H, H' are called homomorphism indistinguishable over a class C if for every hypergraph G in C the number of homomorphisms from G to H equals the number of homomorphisms from G to H'. This result can be viewed as a generalisation (from graphs to hypergraphs) of a result by Dvorak (2010) stating that any two (undirected, simple, finite) graphs H, H' are indistinguishable by the (k+1)-variable counting logic C^{k+1} if, and only if, they are homomorphism indistinguishable on the class of graphs of tree width at most k. This is joint work with Nicole Schweikardt.
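Homomorphism indistinguishability, as defined here, can be checked by brute force on tiny instances. The sketch below works with plain graphs rather than hypergraphs (purely for illustration) and counts homomorphisms by enumerating all vertex maps:

```python
from itertools import product

def hom_count(g_vertices, g_edges, h_vertices, h_edges):
    """Number of homomorphisms from G to H: maps sending every
    (undirected) edge of G to an edge of H."""
    h_adj = {(u, v) for u, v in h_edges} | {(v, u) for u, v in h_edges}
    total = 0
    for image in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, image))
        if all((f[u], f[v]) in h_adj for u, v in g_edges):
            total += 1
    return total
```

For instance, there are 6 homomorphisms from a single edge into a triangle (any ordered pair of adjacent vertices), but none from a triangle into a single edge, since an odd cycle has no proper 2-colouring.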

Turn-based stochastic games (aka simple stochastic games) are two-player zero-sum games played on directed graphs with probabilistic transitions. The goal of player-max is to maximize the probability of reaching a target state against the adversarial player-min. These games lie in NP ∩ coNP, but the existence of a polynomial-time algorithm is a major open question; all known deterministic algorithms require exponential time in the worst case. An important open question has been whether faster algorithms can be obtained parametrized by the treewidth of the game graph. In this work our main result is a deterministic algorithm to solve turn-based stochastic games that, given a game with n states, treewidth at most t, and bit-complexity log D of the probabilistic transition function, has running time O((t n^2 log D)^(t log n)). In particular, our algorithm is quasi-polynomial time for games with constant or poly-logarithmic treewidth. The main result was published in SODA'23.
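For intuition about the values being computed, such games can be approximated (though not solved exactly in polynomial time) by value iteration. A toy sketch, with hypothetical state kinds "max", "min", "avg" (uniform probabilistic branching, an assumption made here for simplicity), "target" and "sink":

```python
def solve_ssg(kind, succ, iters=200):
    """Value iteration for a turn-based (simple) stochastic reachability
    game. kind[s] is one of "max", "min", "avg", "target", "sink";
    succ[s] lists the successors of state s. Returns approximate values
    (converging to the game values from below)."""
    v = [1.0 if kind[s] == "target" else 0.0 for s in range(len(kind))]
    for _ in range(iters):
        w = list(v)
        for s in range(len(kind)):
            if kind[s] == "max":
                w[s] = max(v[t] for t in succ[s])        # player-max picks best
            elif kind[s] == "min":
                w[s] = min(v[t] for t in succ[s])        # player-min picks worst
            elif kind[s] == "avg":
                w[s] = sum(v[t] for t in succ[s]) / len(succ[s])  # coin flip
        v = w
    return v
```

For example, a max state choosing between a sink and a fair coin flip to the target has value 1/2.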

The Rabin tree theorem yields an algorithm to solve the satisfiability problem for monadic second-order logic over infinite trees. Here we solve the probabilistic variant of this problem. Namely, we show how to compute the probability that a randomly chosen tree satisfies a given formula. We additionally show that this probability is an algebraic number. This closes a line of research where similar results were shown for formalisms weaker than the full monadic second-order logic. Joint work with Damian Niwiński and Michał Skrzypczak, submitted (under review).

Recent years have witnessed a growing interest in reasoning about the finite-horizon counterpart of $\ltl$, called $\ltlf$, for systems of unbounded but finite horizons. While the problems of satisfiability and synthesis from $\ltlf$ specifications have been studied extensively, the verification problem has surprisingly been overlooked. To this end, this work presents the \emph{first} study of model checking from $\ltlf$ specifications. We observe that there are striking differences between $\ltlf$ and $\ltl$ model checking. Most significantly, under the same non-terminating semantics of models, $\ltlf$ model checking is $\expspace$-complete, making it exponentially harder than $\ltl$ model checking. This is unexpected, since one of the attributes behind the success of $\ltlf$ is that problems over $\ltlf$ have so far been {\em perceived} to be at most as hard as those on $\ltl$, if not easier. For instance, (a) reasoning about $\ltlf$ deals with automata over finite words whereas $\ltl$ requires automata over infinite words, (b) the complexities of reactive synthesis and satisfiability from $\ltlf$ and $\ltl$ are identical, and so on. We also show that under \emph{terminating} semantics, $\ltlf$ model checking is $\pspace$-complete. This demonstrates the importance of semantics in model checking for finite-horizon temporal specifications.

We propose a novel algorithm to decide language inclusion between (nondeterministic) Buchi automata, a PSpace-complete problem. Our approach, like others before it, leverages a notion of quasiorder to prune the search for a counterexample by discarding candidates which are subsumed by others for the quasiorder. Discarded candidates are guaranteed not to compromise the completeness of the algorithm. The novelty of our work lies in the quasiorder used to discard candidates. We introduce FORQs (families of right quasiorders), which we obtain by adapting the notion of family of right congruences put forward by Maler and Staiger in 1993. We define a FORQ-based inclusion algorithm which we prove correct and instantiate for a specific FORQ, called the structural FORQ, induced by the Buchi automaton to the right of the inclusion sign. The resulting implementation, called Forklift, scales up better than the state of the art on a variety of benchmarks, including benchmarks from program verification and theorem proving for word combinatorics.

We consider the following decision problems: given a finite, rational Markov chain, source and target states, and a rational threshold, does there exist an n such that the probability of reaching the target from the source in n steps is equal to the threshold (resp. crosses the threshold)? These problems are known to be equivalent to the Skolem (resp. Positivity) problems for Linear Recurrence Sequences (LRS). These are number-theoretic problems whose decidability has been open for decades. We present a short, self-contained, and elementary reduction from LRS to Markov Chains that improves the state of the art as follows: (a) We reduce to ergodic Markov Chains, a class that is widely used in Model Checking. (b) We reduce LRS to Markov Chains of significantly lower order than before. We thus get sharper hardness results for a more ubiquitous class of Markov Chains. Immediate applications include problems in modeling biological systems, and regular automata-based counting problems.
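The quantity in question, the probability of being in the target after exactly n steps, is easy to compute for any fixed n; the hard part is deciding whether it ever hits (or crosses) the threshold over all n. A small exact-arithmetic sketch of the n-step probability:

```python
from fractions import Fraction

def step_probs(P, src, tgt, n):
    """Probability of being in state `tgt` after exactly n steps when
    starting in `src`, for a finite Markov chain with rational transition
    matrix P (row-stochastic, entries as Fractions)."""
    dist = [Fraction(0)] * len(P)
    dist[src] = Fraction(1)
    for _ in range(n):  # one matrix-vector product per step
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist[tgt]
```

With P = [[1/2, 1/2], [0, 1]] the probability of sitting in the absorbing state 1 after n steps is 1 − (1/2)^n; asking whether this sequence ever equals a given threshold is exactly a Skolem-type question.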

For a metric over words, we define the distance between two transducers with identical domains as the supremum of the distances of their respective outputs over all inputs. We argue that this is a useful generalisation for comparing transducers that goes beyond the equivalence problem. Two transducers are said to be \emph{close} (\textit{resp.} $k$-close) with respect to a metric if their distance is bounded (\textit{resp.} at most $k$). Over integer-valued metrics, computing the distance between transducers is equivalent to deciding closeness and $k$-closeness. We show that closeness and $k$-closeness are decidable for rational transducers w.r.t.~common edit distances; hence the distances with respect to them are also computable. The proof relies on a generalisation of theorems of Lyndon and Sch\"utzenberger from combinatorics of words. Stated concisely, the theorem reads that a sumfree set of pairs (one generated by a regular expression free of unions) is conjugate if and only if there is a word witnessing the conjugacy of all the pairs. An interesting corollary of our results is that it can be decided whether the outputs of two sequential or rational transducers are conjugates for all inputs.
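As a concrete instance of two transductions being close, the sketch below compares two toy word functions (hypothetical examples, not from the talk) under the Levenshtein edit distance, approximating the supremum over all inputs up to a length bound:

```python
from itertools import product

def levenshtein(a: str, b: str) -> int:
    """Standard edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def dist_up_to(f, g, alphabet, n):
    """Max distance of outputs over all inputs of length <= n: a finite
    approximation of the supremum defining the transducer distance."""
    return max(levenshtein(f("".join(w)), g("".join(w)))
               for length in range(n + 1)
               for w in product(alphabet, repeat=length))
```

For f(w) = w·ab and g(w) = w·ba the outputs are always at edit distance 2, so these two transductions are 2-close.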

Choiceless Polynomial Time (CPT) is one of the few remaining candidate logics for capturing PTIME. We make progress towards separating CPT from polynomial time by firstly establishing a connection between the expressive power of CPT and the existence of certain symmetric circuit families, and secondly, proving lower bounds against these circuits. We focus on the isomorphism problem of unordered Cai-Fürer-Immerman-graphs (the CFI-query) as a potential candidate for separating CPT from P. Results by Dawar, Richerby and Rossman, and subsequently by Pakusa, Schalthöfer and Selman show that the CFI-query is CPT-definable on linearly ordered and preordered base graphs with small colour classes. We define a class of CPT-algorithms that we call "CFI-symmetric algorithms", which generalises all the known ones, and show that such algorithms can only define the CFI-query on a given class of base graphs if there exists a family of symmetric XOR-circuits with certain properties. These properties include that the circuits have the same symmetries as the base graphs, are of polynomial size, and satisfy certain fan-in restrictions. Then we prove that such circuits with slightly strengthened requirements (i.e. stronger symmetry and fan-in and fan-out restrictions) do not exist for the n-dimensional hypercubes as base graphs. This almost separates the CFI-symmetric algorithms from polynomial time - up to the gap that remains between the circuits whose existence we can currently disprove and the circuits whose existence is necessary for the definability of the CFI-query by a CFI-symmetric algorithm.

We study the problem of computing the reachable blocks of simulation equivalence and design algorithms for this problem by interleaving reachability and simulation computation, while possibly avoiding the computation of all the reachable states or of the whole simulation preorder. The interest in computing simulation blocks stems from the fact that simulation provides a better state-space reduction than other equivalence notions such as bisimilarity, yet retains enough precision for model checking. On the other hand, the problem turns out to be significantly harder than in the bisimulation case. We put forward a sound algorithm manipulating state partitions and relations between their blocks, suited for processing infinite-state systems. This is joint work with Pierre Ganty and Francesco Ranzato.

Seminal results establish that the coverability problem for Vector Addition Systems with States (VASS) is in EXPSPACE (Rackoff, '78) and is EXPSPACE-hard already under unary encodings (Lipton, '76). More precisely, Rosier and Yen later utilised Rackoff's bounding technique to show that if coverability holds then there is a run of length at most n^(2^(O(d·log d))), where d is the dimension and n is the size of the given unary VASS. Earlier, Lipton showed that there exist instances of coverability in d-dimensional unary VASS that are only witnessed by runs of double-exponential length, at least n^(2^(Omega(d))). In this presentation, I will outline how to carefully strengthen Rackoff's approach to improve the upper bound: if coverability holds then there exists a run of length at most n^(2^(O(d))). We therefore obtain an optimal 2^(O(d))·log(n)-space algorithm for coverability. This closes a forty-one-year-old gap that was stated as an open problem by Mayr and Meyer. Furthermore, we obtain a deterministic n^(2^(O(d)))-time algorithm for coverability in unary d-VASS. We show that this is optimal under the Exponential Time Hypothesis, in particular by leveraging the time required to detect a clique in a graph. I also plan to present the approach and techniques used in our reduction from finding cliques in graphs to coverability in unary VASS. Ultimately, this yields a conditional matching lower bound: coverability in unary d-VASS requires n^(2^(Omega(d))) time. This presentation concerns ongoing joint work with Marvin Künnemann, Filip Mazowiecki, Lia Schütze, and Karol Węgrzycki. Please see the attached extended abstract for a broader overview of our recent work.
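The abstract is about precise bounds, but the decision problem itself can be illustrated by the classical backward coverability algorithm, sketched here for stateless vector addition systems (a simplification of VASS; this is generic background, not the algorithm of the talk):

```python
def coverable(source, target, transitions):
    """Backward coverability for a stateless vector addition system:
    each transition is an integer vector d, firable at marking x iff
    x + d stays componentwise >= 0. We saturate a basis of minimal
    elements of Pre*(up(target)); termination follows from Dickson's
    lemma, and target is coverable iff some basis element <= source."""
    def leq(a, b):
        return all(x <= y for x, y in zip(a, b))

    basis = [tuple(target)]
    frontier = [tuple(target)]
    while frontier:
        m = frontier.pop()
        for d in transitions:
            # minimal x with x firable on d and x + d >= m
            p = tuple(max(mi - di, -di, 0) for mi, di in zip(m, d))
            if not any(leq(b, p) for b in basis):
                basis = [b for b in basis if not leq(p, b)] + [p]
                frontier.append(p)
    return any(leq(b, tuple(source)) for b in basis)
```

For example, with transitions (1,0) ("mint a token") and (-1,2) ("trade one token for two"), the marking (0,3) is coverable from (0,0); with only (-1,1) available, (0,2) is not coverable from (1,0).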

The presentation provides a mathematical setting for attack-defense trees, a classic graphical model to specify attacks and nested countermeasures, which we equip with a trace language semantics allowing for an original dynamic interpretation of countermeasures. Interestingly, the expressiveness of attack-defense trees coincides with the star-free languages, and the nesting of countermeasures impacts the expressiveness of attack-defense trees. With an adequate notion of counter-depth, we exhibit a strict hierarchy of the star-free languages that coincides, on the very first levels, neither with the dot-depth hierarchy nor with the first-order logic quantifier alternation hierarchy. Additionally, driven by the use of attack-defense trees in practice, we address the decision problems of trace membership and of non-emptiness, and study their computational complexities parameterized by the counter-depth. The presentation is based on joint work with Sophie Pinchinat and Thomas Brihaye.

How to elegantly determine the best fit of (in)equivalences for a pair of processes? We generalize the well-known bisimulation game to find distinguishing Hennessy-Milner logic formulas for finite-state processes covering van Glabbeek's linear-time–branching-time spectrum. By quantifying expressive power along six dimensions, we can quickly identify distinctions belonging to the coarsest distinguishing behavioral preorders and equivalences; the compared processes are equivalent in each coarser behavioral equivalence from the spectrum. The core of the talk is based on joint work with David N. Jansen and Uwe Nestmann, published in LMCS https://doi.org/10.46298/lmcs-18(3:19)2022. A prototype implementation of our generic equivalence checking is available at https://equiv.io/.
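The finest notion of the spectrum, strong bisimilarity, can be decided by naive partition refinement; a small sketch (illustrative background only — the talk's game computes the whole spectrum, not just this one point):

```python
def bisimilar(states, trans, s, t):
    """Naive strong-bisimilarity check: refine a partition of `states` by
    the signature of outgoing (action, target-block) pairs until stable.
    `trans` is a set of (source, action, target) triples."""
    part = {q: 0 for q in states}
    while True:
        sig = {q: frozenset((a, part[r]) for (p, a, r) in trans if p == q)
               for q in states}
        fresh = {}
        new = {q: fresh.setdefault((part[q], sig[q]), len(fresh))
               for q in states}
        # each new block refines an old one, so equal counts mean stability
        if len(set(new.values())) == len(set(part.values())):
            return new[s] == new[t]
        part = new
```

The classic example a.(b+c) versus a.b + a.c is trace equivalent but not bisimilar, and the refinement above separates the two initial states.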

Recently Ehlers & Schewe proposed a canonical minimal representation for omega-regular languages [1]. The representation consists of a canonical chain of co-Büchi languages (COCOA): A1 ⊃ A2 ⊃ ... ⊃ An, where each Ai is expressible by a good-for-games co-Büchi automaton, canonical and minimal by the work of Abu Radi and Kupferman [2]. This work presents a first application of the new representation, to the problem of reactive synthesis. In reactive synthesis we want to automatically construct a transducer from a specification in e.g. linear temporal logic. The flow of our construction is: translate a given LTL formula into a parity automaton and into COCOA using [1,2], then construct a mu-calculus fixpoint formula which is evaluated on a game graph of the synthesis problem. The resulting procedure generalizes the widely known Generalized Reactivity(1) synthesis approach to full LTL while inheriting its efficiency advantages on the original specification class. This is joint work with Rüdiger Ehlers (main contributor). [1]: Natural Colors of Infinite Words, by Rüdiger Ehlers and Sven Schewe. [2]: Minimization and Canonization of GFG Transition-Based Automata, by Bader Abu Radi and Orna Kupferman.

Strategy Logic has two major flaws: the associated decision problems have high complexity (model checking is non-elementary and satisfiability is undecidable), and the strategies quantified over might not be feasible. However, these drawbacks appear to be related, as the One-Goal fragment has 2-EXPTIME complexity and feasible strategies suffice to verify a formula. In this work, we propose an approach to define a logic suited to reasoning about feasible strategies by changing the domain of quantification. We replace strategies with plans (sequences of actions) and define a behavioral compositional semantics to ensure feasibility. This allows for the definition of an equivalent game-theoretic semantics and, in turn, 2-EXPTIME decision procedures.

Nowhere density is a seminal notion in algorithmic graph theory and combinatorics that generalises numerous well-behaved properties of classes of finite graphs. For monotone classes, nowhere density appears to be a key dividing line that separates tame from wild behaviour. From the perspective of model theory, Adler and Adler established that the monotone nowhere dense classes are precisely those that are monadically NIP, i.e. they do not transduce the class of all graphs, and conjectured that the same phenomenon occurs for classes of arbitrary relational structures and their corresponding Gaifman graphs. In this talk, I discuss recent results of Braunfeld, Dawar, Eleftheriadis, and Papadopoulos (2023) that settle the question of Adler and Adler, and survey a few results from graph sparsity theory that generalise to the relational setting.

The fixed-point logic LREC= was developed by Grohe et al. (CSL 2012) in the quest for a logic to capture all problems decidable in logarithmic space. It extends FO+C, first-order logic with counting, by an operator that formalises a limited form of recursion. We show that for every LREC=-definable property on relational structures, there is a constant k such that the k-variable fragment of first-order logic with counting quantifiers expresses the property via formulae of logarithmic quantifier depth. This yields that any pair of graphs separable by the property can be distinguished with the k-dimensional Weisfeiler-Leman algorithm in a logarithmic number of iterations. In particular, it implies that the algorithm identifies every interval graph and every chordal claw-free graph in logarithmically many iterations. Joint work with Steffen van Bergerem, Martin Grohe, and Sandra Kiefer.
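For graphs, the 1-dimensional version of the Weisfeiler-Leman algorithm (colour refinement) is easy to sketch; running it on the disjoint union of two graphs and comparing the resulting colour multisets gives the usual non-isomorphism test:

```python
def wl_distinguishes(n1, e1, n2, e2, rounds):
    """1-dimensional Weisfeiler-Leman (colour refinement) on the disjoint
    union of two graphs (vertices 0..n1-1 and n1..n1+n2-1). Returns True
    iff the colour multisets of the two parts differ after `rounds`
    refinement rounds, which certifies non-isomorphism."""
    n = n1 + n2
    edges = list(e1) + [(u + n1, v + n1) for (u, v) in e2]
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    col = {v: 0 for v in range(n)}
    for _ in range(rounds):
        # refine each colour by the multiset of neighbour colours
        sig = {v: (col[v], tuple(sorted(col[u] for u in adj[v])))
               for v in range(n)}
        fresh = {}
        col = {v: fresh.setdefault(sig[v], len(fresh)) for v in range(n)}
    return (sorted(col[v] for v in range(n1))
            != sorted(col[v] for v in range(n1, n)))
```

One round separates a path from a triangle (the degrees already differ), while the 6-cycle and two disjoint triangles are never separated: both are 2-regular, the classic limitation of colour refinement.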

We study two-player zero-sum games played on graphs and make contributions toward the following question: given an objective, how much memory is required to play optimally for that objective? We study regular objectives, where the goal of one of the two players is that eventually the sequence of colors along the play belongs to some regular language of finite words. We obtain different characterizations of the chromatic memory requirements for such objectives for both players, from which we derive complexity-theoretic statements: deciding whether there exist small memory structures sufficient to play optimally is NP-complete for both players. Some of our characterization results apply to a more general class of objectives: topologically closed and topologically open sets. These results are based on joint work with Patricia Bouyer, Nathanaël Fijalkow, and Mickael Randour.

We consider the verification of distributed systems composed of an arbitrary number of asynchronous processes. Processes are identical finite-state machines communicating by reading from and writing to a shared memory. Beyond the standard model with finitely many registers, we tackle round-based shared-memory systems with fresh registers at each round. In the latter model, both the number of processes and the number of registers are unbounded, making verification particularly challenging. The properties studied are generic presence reachability objectives that subsume classical questions such as safety or synchronization by expressing the presence or absence of processes in some states. In the more general round-based setting, we establish that the parameterized verification of presence reachability properties is PSPACE-complete. Moreover, for the roundless model with finitely many registers, we prove that the complexity drops down to NP-complete and we provide several natural restrictions that make the problem solvable in polynomial time.

We show that the big-O problem for max-plus automata, i.e. weighted automata over the semiring (N ∪ {-∞}, max, +), is decidable and PSPACE-complete. The big-O (or affine domination) problem asks whether, given two max-plus automata computing functions f and g, there exists a constant c such that f ≤ cg + c. This is a relaxation of the containment problem asking whether f ≤ g, which is undecidable. Our decidability result uses Simon's forest factorisation theorem, and relies on detecting specific elements, that we call witnesses, in a finite semigroup closed under two special operations: stabilisation and flattening. Joint work with Laure Daviaud.
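To fix notation: a max-plus automaton computes f(w) as a matrix product over (N ∪ {-∞}, max, +), with the initial and final vectors supplying entry and exit weights. An illustrative evaluator, applied to a toy two-state automaton computing max(#a, #b):

```python
NEG_INF = float("-inf")

def maxplus_value(init, mats, final, word):
    """Weight of `word` in a max-plus automaton: the (max,+)-product
    init * M[a1] * ... * M[an] * final, where sum is max and product
    is ordinary addition."""
    vec = list(init)
    for a in word:
        M = mats[a]
        vec = [max(vec[i] + M[i][j] for i in range(len(vec)))
               for j in range(len(M[0]))]
    return max(v + f for v, f in zip(vec, final))

# Toy automaton: state 0 counts a's, state 1 counts b's; the final max
# over states yields max(#a, #b).
MATS = {"a": [[1, NEG_INF], [NEG_INF, 0]],
        "b": [[0, NEG_INF], [NEG_INF, 1]]}
```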

In this talk, I will discuss the descriptive complexity theory of finite groups by examining the power of the second Ehrenfeucht-Fraïssé bijective pebble game in Hella's (Ann. Pure Appl. Log., 1989) hierarchy. This is a Spoiler-Duplicator game in which Spoiler can place up to two pebbles each round. While it trivially solves graph isomorphism, it may be nontrivial for finite groups and other ternary relational structures. Our main result is that, in the pebble game characterization, only $O(1)$ pebbles and $O(1)$ rounds suffice to identify all groups without Abelian normal subgroups (a class of groups for which isomorphism testing is known to be in \textsf{P}; Babai, Codenotti, \& Qiao, ICALP 2012). By Hella's results (ibid.), this is equivalent to saying that these groups are identified by formulas in first-order logic with generalized $2$-ary quantifiers, using only $O(1)$ variables and $O(1)$ quantifier depth. This is joint work with Joshua A. Grochow.

In the context of infinite duration games over graphs, a central question is: For which objectives can the existential player always play optimally using positional (memoryless) strategies? We present a characterization of positionality for the class of omega-regular objectives. Namely, an omega-regular objective is positional if and only if it is recognized by a good-for-games parity automaton admitting a total order over its set of states such that its transitions satisfy some monotonicity properties with respect to this order. Using this characterization, we hope to answer many open questions concerning positionality in the case of omega-regular objectives, including obtaining finite-to-infinite and one-to-two-player arena lifts and the closure under union of prefix-independent positional objectives. This talk is based on ongoing work with Pierre Ohlmann.

Consider the following supervised classification problem: given a background structure with a set of instances X and a training set S ⊆ X × {+, −}, we aim to find a hypothesis h : X → {+, −} from some hypothesis class H that maps every instance to a class. Such a hypothesis is consistent with a training example (x, c) ∈ S when h(x) = c. We consider a setup where the background structure is a relational structure A and an instance v̄ ∈ A^d is a d-tuple of elements from the universe. We aim to learn a monadic second-order formula φ(x̄) with |x̄| = d free instance variables that is consistent with the training set. This means that for every positive example (v̄, +) ∈ S we have A |= φ(v̄), and for (v̄, −) ∈ S we have A ⊭ φ(v̄). We call the problem MSO-Learn, and for d = 1 we speak of unary-MSO-Learn. We analyze the problem in the context of parameterized complexity and show that it is fixed-parameter tractable for d = 1 on structures of bounded tree-width and graphs of bounded clique-width. Furthermore, we give bounds for d > 1 in the PAC learning framework.

It is known that first-order logic with some counting extensions can be efficiently evaluated on graph classes with bounded expansion, where depth-$r$ minors have constant density. More precisely, the formulas are $\exists x_1 \ldots x_k\, \#y\, \phi(x_1,\ldots,x_k, y) > N$, where $\phi$ is an FO-formula. If $\phi$ is quantifier-free, we can extend this result to \emph{nowhere dense} graph classes with an almost linear FPT run time. Lifting this result further to slightly more general graph classes, namely almost nowhere dense classes, where the size of depth-$r$ clique minors is subpolynomial, is impossible unless $\rm FPT=W[1]$. On the other hand, in almost nowhere dense classes we can approximate such counting formulas with a small additive error. In particular, it follows that partial covering problems, such as partial dominating set, have fixed-parameter algorithms on nowhere dense graph classes with almost linear running time.

Designing computational reductions is a challenging task for students. In this paper we explore formal foundations for offering support for this learning task by teaching support systems. Our approach is to (a) identify typical building blocks of reductions, (b) develop a simple descriptional language for specifying reductions that allows for combining building blocks in a simple, modular manner, (c) study the expressive power of such a language to ensure that it is reasonably broad, and (d) design suitable algorithms for employing this language in teaching support systems, including the design of feedback mechanisms. This is joint work with Julien Grange, Nils Vortmeier and Thomas Zeume.

Recently, Frits W. Vaandrager et al. [1] studied learning (automatic construction using the well-known L* algorithm [2]) of finite state machines augmented with a timer. Timers are an often-used mechanism in many contexts, such as network protocols and control software, in which a system sends a message and, if an acknowledgement message is not received after a certain amount of time, the system sends the same message, i.e., it relies on a timer. In a machine with timers, we start a timer by setting it to a certain constant. Whenever a timer reaches zero, it produces a special "time out" symbol that triggers a transition in the finite state machine. Obtaining a learning algorithm for machines with multiple timers is not easy, as multiple actions can occur at the same time (if multiple timers reach zero concurrently, for instance). In these "races", the machine decides non-deterministically in which order these actions are processed. Moreover, a transition may kill or restart a timer. It is thus not easy, from an execution trace containing races, to be certain whether a timer is active and, if it is, at which value it was set. Moreover, as these races do not appear in the single-timer case, we cannot straightforwardly extend the techniques from [1]. Therefore, in this joint work with Véronique Bruyère (University of Mons), Guillermo A. Pérez (University of Antwerp), and Frits W. Vaandrager (Radboud Universiteit), we lay the first steps towards a learning algorithm for finite state machines augmented with a finite number of timers by studying how one can remove these races while still observing the actions in the same order. We also prove that the reachability problem is PSPACE-complete. [1] Frits W. Vaandrager, Roderick Bloem, Masoud Ebrahimi. Learning Mealy Machines with One Timer. Information and Computation (2023). [2] Dana Angluin. Learning Regular Sets from Queries and Counterexamples. Information and Computation 75.2 (1987).

In dynamic analysis of programs, one observes an execution (sequence of events) and checks whether the execution satisfies or violates some property. When the programs are concurrent, such a simple check is often not robust to nondeterministic thread scheduling. In this case, the more appropriate question is the predictive version: given an execution, is there a semantically equivalent execution that satisfies or violates the property of interest? In concurrency theory, when equivalence is characterised by Mazurkiewicz traces, the corresponding question is called the membership problem for trace languages (MPTL). In 1989, A. Bertoni, G. Mauri, and N. Sabadini showed that MPTL can be solved in time O(n^\alpha) when the property is described by a regular language; n is the size of the execution (or string) and \alpha describes the width of the concurrency relation. We show that, in fact, this problem cannot be solved in better time, by establishing a matching (conditional) lower bound of \Omega(n^\alpha) under the Strong Exponential Time Hypothesis. This result applies even to star-free regular languages. Motivated by the practical applicability of runtime monitoring in a predictive setting, we propose a class of languages, called pattern languages, for which MPTL can be solved in constant space and linear time, in other words, using a DFA. Pattern languages essentially encode the existence of a subsequence: for every pattern language L, there is a finite string u such that every member string w in L has u as a subsequence. Pattern languages are a subclass of the star-free regular languages and are closed under union, intersection, concatenation, and Kleene star.
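The core membership test behind a pattern language is exactly the promised constant-space, linear-time check: one left-to-right pass simulating the (|u|+1)-state subsequence DFA. A minimal sketch:

```python
def has_subsequence(w: str, u: str) -> bool:
    """Membership in the pattern language {w : u is a subsequence of w},
    checked in one pass with O(1) extra state: the single counter i is
    the current state of the (|u|+1)-state subsequence DFA."""
    i = 0
    for c in w:
        if i < len(u) and c == u[i]:
            i += 1  # advance to the next DFA state
    return i == len(u)
```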

This talk discusses the problem of efficiently solving parity games where player Odd has to obey an additional strong transition fairness constraint on its vertices. We call such games Odd-fair parity games. Banerjee et al. showed that Odd-fair parity games can be solved via a fixed-point formula that highly resembles the one for 'regular' parity games. In this talk, we present a new Zielonka-type algorithm for solving Odd-fair parity games, which not only shares the same worst-case complexity as Zielonka's algorithm for regular parity games but also preserves the algorithmic advantage that Zielonka's algorithm possesses over other parity solvers with exponential time complexity, such as Banerjee et al.'s fixed-point algorithm. Zielonka's algorithm's established reputation for solving parity games efficiently makes the Odd-fair Zielonka algorithm presented in this talk the best known generic algorithm to solve Odd-fair parity games.
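For reference, the classic Zielonka recursion (without the fairness extension discussed in the talk) can be sketched as follows; subgames obtained by removing attractors are assumed to stay total, a standard simplification:

```python
def attractor(targets, player, region, owner, succ):
    """Vertices of `region` from which `player` can force a visit to `targets`."""
    attr = set(targets)
    changed = True
    while changed:
        changed = False
        for v in region - attr:
            outs = [w for w in succ[v] if w in region]
            if (owner[v] == player and any(w in attr for w in outs)) or \
               (owner[v] != player and outs and all(w in attr for w in outs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(region, owner, prio, succ):
    """Classic Zielonka recursion: returns (win_even, win_odd) for the
    parity game restricted to `region` (max priority seen infinitely
    often decides the winner)."""
    if not region:
        return set(), set()
    d = max(prio[v] for v in region)
    p = d % 2  # player favoured by the top priority
    a = attractor({v for v in region if prio[v] == d}, p, region, owner, succ)
    w = zielonka(region - a, owner, prio, succ)
    if not w[1 - p]:
        return (set(region), set()) if p == 0 else (set(), set(region))
    b = attractor(w[1 - p], 1 - p, region, owner, succ)
    w2 = zielonka(region - b, owner, prio, succ)
    result = [set(), set()]
    result[p] = set(w2[p])
    result[1 - p] = set(w2[1 - p]) | b
    return tuple(result)
```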

We consider the parameterized verification of a model representing an arbitrarily large network of agents which communicate by broadcasting and receiving messages. The broadcast topology is reconfigurable, hence a message sent may be received by an arbitrary subset of agents. Agents have local registers which contain identifiers. They may broadcast, receive, store and compare these values using their registers. We consider the coverability problem, where one asks whether a given state of the system may be reached by at least one agent. We show that this problem is decidable using a refined abstraction of runs; however, it is as hard as coverability in lossy channel systems, which is Hyper-Ackermannian. We contrast this with the undecidability of the closely-related target reachability problem on the one hand, and with the NP-completeness of the problem when each agent has only one register on the other. This is joint work with Lucie Guillou and Nicolas Waldburger.

In Petri nets with data, tokens carry values that can be tested during the execution of transitions. In the case of equality data we can check if two tokens carry the same data value; in the case of ordered data we can check if the data value in a token is greater/less than the data value in some other token. The decidability status of the reachability problem in Petri nets with equality data is still open, while the same problem for Petri nets with ordered data is known to be undecidable. We prove the decidability of the reachability problem, both for equality and ordered data, in the subclass of symmetric Petri nets, where each transition has its inverse. Towards this aim, we lift the classical theory of Gröbner bases to ideals of polynomials over an orbit-finite set of variables. This is a work in progress, conducted jointly with Piotr Hofman and Sławomir Lasota.

Semiring semantics for first-order logic provides a way to trace how facts represented by a model are used to deduce satisfaction of a formula. Team semantics is a framework for studying logics of dependence and independence in diverse contexts such as databases, quantum mechanics, and statistics by extending first-order logic with atoms that describe dependencies between variables. Combining these two, we propose a unifying approach for analysing the concepts of dependence and independence via a novel semiring team semantics, which subsumes all the previously considered variants of first-order team semantics. In particular, we study the preservation of satisfaction of dependencies and formulae between different semirings. In addition, we create links to reasoning tasks such as provenance, counting, and repairs. This is joint work with Timon Barlag, Miika Hannula, Juha Kontinen, and Jonni Virtema.
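To make the semiring idea concrete with a toy sketch of my own (not the paper's team semantics): interpret disjunction by the semiring's addition and conjunction by its multiplication, so the same positive formula yields truth in the Boolean semiring and a derivation count in the counting semiring (N, +, ×).

```python
def eval_formula(phi, ann, plus, times, zero, one):
    """Semiring semantics for a positive propositional formula.
    phi is either a fact name (str) annotated in `ann`, or a nested
    tuple ('and' | 'or', subformula, ...); 'or' folds with plus from
    zero, 'and' folds with times from one."""
    if isinstance(phi, str):
        return ann[phi]
    op, *args = phi
    out = zero if op == 'or' else one
    for a in args:
        v = eval_formula(a, ann, plus, times, zero, one)
        out = plus(out, v) if op == 'or' else times(out, v)
    return out

phi = ('or', ('and', 'p', 'q'), 'r')
# Boolean semiring: ordinary truth
print(eval_formula(phi, {'p': True, 'q': False, 'r': True},
                   lambda x, y: x or y, lambda x, y: x and y,
                   False, True))            # True
# Counting semiring (N, +, *): number of derivations (provenance)
print(eval_formula(phi, {'p': 2, 'q': 3, 'r': 1},
                   lambda x, y: x + y, lambda x, y: x * y,
                   0, 1))                   # 7  (= 2*3 + 1)
```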

Pushdown Vector Addition Systems with States (PVASS) consist of finitely many control states, a pushdown stack, and a set of counters that can be incremented and decremented, but not tested for zero. Whether the reachability problem is decidable for PVASS is a long-standing open problem. We consider continuous PVASS, which are PVASS with a continuous semantics. This means that the counter values are non-negative rational numbers and, whenever a vector is added to the current counter values, this vector is first scaled with an arbitrarily chosen fraction between zero and one. We show that reachability in continuous PVASS is NEXPTIME-complete. Our result is unusually robust: reachability can be decided in NEXPTIME even if all numbers are specified in binary. On the other hand, NEXPTIME-hardness already holds for coverability, in fixed dimension, for bounded stack, and even if all numbers are specified in unary. This is joint work with Rupak Majumdar, Ramanathan S. Thinniyam, and Georg Zetzsche.
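The continuous step relation is easy to state concretely. The following sketch (my illustration, using exact rationals; the vectors are arbitrary, and I assume the scaling fraction lies in (0, 1], a common convention) shows one scaled firing of an update vector:

```python
from fractions import Fraction

def continuous_step(counters, update, alpha):
    """One continuous-semantics step: the update vector is scaled by a
    chosen fraction 0 < alpha <= 1 before being added; the step is
    enabled only if every counter stays non-negative."""
    assert Fraction(0) < alpha <= 1
    new = [c + alpha * u for c, u in zip(counters, update)]
    if any(c < 0 for c in new):
        return None  # step disabled: a counter would go negative
    return new

# From (0, 1), firing update (+1, -1) scaled by alpha = 1/3:
print(continuous_step([Fraction(0), Fraction(1)], [1, -1], Fraction(1, 3)))
# -> [Fraction(1, 3), Fraction(2, 3)]
```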

Co-authors: Mikhail Raskin and Javier Esparza. Vector Addition Systems (VAS), aka Petri nets, are a popular model of concurrency. The reachability set of a VAS is the set of configurations reachable from the initial configuration. Leroux has studied the geometric properties of VAS reachability sets, and used them to derive decision procedures for important analysis problems. In this talk we continue the geometric study of reachability sets. We show that every reachability set admits a finite decomposition into disjoint almost hybridlinear sets enjoying nice geometric properties. Further, we prove that the decomposition of the reachability set of a given VAS is effectively computable. As a corollary, we derive a simple algorithm for deciding semilinearity of VAS reachability sets, the first one since Hauschildt's 1990 algorithm. As a second corollary, we prove that the complement of a reachability set always contains an infinite linear set.

This extended abstract is based on joint work with Jan Křetínský, accepted at LICS'23. An extended version is available at https://arxiv.org/abs/2304.09930. This is the favourite result of both authors of this submission. We plan a joint presentation, also to emphasize the game aspect of the content. A classic solution technique for Markov decision processes (MDP) and stochastic games (SG) is value iteration (VI). Due to its good practical performance, this approximate approach is typically preferred over exact techniques, even though until recently no practical bounds on the imprecision of the result could be given. As a consequence, even the most used model checkers could return arbitrarily wrong results. Over the past decade, different works derived stopping criteria, indicating when the precision reaches the desired level, for various settings, in particular MDP with reachability, total reward, and mean payoff, and SG with reachability. In this paper, we provide the first stopping criteria for VI on SG with total reward and mean payoff, yielding the first anytime algorithms in these settings. To this end, we provide the solution in two flavours: first through a reduction to the MDP case, and second directly on SG. The former is simpler and automatically utilizes any advances on MDP. The latter allows for more local computations, heading towards better practical efficiency. Our solution unifies the previously mentioned approaches for MDP and SG and their underlying ideas. To achieve this, we isolate objective-specific subroutines as well as identify objective-independent concepts. These structural concepts, while surprisingly simple, form the very essence of the unified solution.
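To illustrate the anytime idea on the simplest possible case (a Markov chain for reachability, a toy of my own rather than the paper's SG setting), interval value iteration keeps a lower bound (from 0) and an upper bound (from 1) and stops once they are eps-close; the paper's stopping criteria extend this style of guarantee to SG with total reward and mean payoff.

```python
def interval_value_iteration(P, goal, sink, eps=1e-6):
    """Anytime VI for reachability in a Markov chain: iterate a lower
    bound L and an upper bound U and stop when they meet -- a sound
    stopping criterion here because this toy chain has no non-trivial
    end components."""
    states = set(P) | {goal, sink}
    L = {s: 1.0 if s == goal else 0.0 for s in states}
    U = {s: 0.0 if s == sink else 1.0 for s in states}
    while max(U[s] - L[s] for s in states) > eps:
        L = {s: L[s] if s in (goal, sink)
             else sum(p * L[t] for t, p in P[s].items()) for s in states}
        U = {s: U[s] if s in (goal, sink)
             else sum(p * U[t] for t, p in P[s].items()) for s in states}
    return L, U

P = {'init': {'goal': 0.3, 'mid': 0.7},
     'mid':  {'goal': 0.5, 'sink': 0.5}}
L, U = interval_value_iteration(P, 'goal', 'sink')
print(round(L['init'], 6))  # -> 0.65 = 0.3 + 0.7 * 0.5
```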

Given a relation on the states of a finite deterministic automaton, is it possible to define a congruence of finite index refining this relation and satisfying a given property? This general problem came up while searching for an algebraic solution to the register minimization problem for streaming string transducers realizing rational functions between words. It is also linked to the problem of separation of regular languages. In this talk, I will present different equivalent forms of the refinement problem and show its connection with other well-known problems. I will also present an interesting tool, which is part of ongoing work on the subject, that can describe the possible solutions of this problem.
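A familiar special case may help fix intuition (my toy sketch, not the talk's general tool): Moore-style partition refinement computes the coarsest congruence refining a given relation on states, namely the partition induced by a state-labelling; refining the accepting/non-accepting split yields the Myhill-Nerode congruence. The talk's refinement problem asks for congruences refining more general relations while satisfying extra properties.

```python
def coarsest_congruence(states, alphabet, delta, init_classes):
    """Partition refinement: start from the partition given by
    init_classes (a dict state -> class id, i.e. the relation to
    refine) and split classes until the partition is a congruence
    for the transition function delta."""
    block = dict(init_classes)
    while True:
        sig = {s: (block[s], tuple(block[delta[s, a]] for a in alphabet))
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new_block = {s: ids[sig[s]] for s in states}
        if new_block == block:
            return block
        block = new_block

# toy DFA over {a}: 0 -a-> 1 -a-> 2 -a-> 1, with 1 and 2 "accepting"
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 1}
blocks = coarsest_congruence([0, 1, 2], ['a'], delta, {0: 0, 1: 1, 2: 1})
print(blocks[1] == blocks[2])  # True: states 1 and 2 are merged
```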

Authors: Pascal Baumann, Flavio D’Alessandro, Moses Ganardi, Oscar Ibarra, Ian McQuillan, Lia Schütze, and Georg Zetzsche. We consider a general class of decision problems concerning formal languages, called “(one-dimensional) unboundedness predicates”, for automata that feature reversal-bounded counters (RBCA). We show that each problem in this class reduces—non-deterministically in polynomial time—to the same problem for just finite automata. We also show an analogous reduction for automata that have access to both a pushdown stack and reversal-bounded counters (PRBCA). This allows us to answer several open questions: for example, we show that it is coNP-complete to decide whether a given (P)RBCA language L is bounded, meaning whether there exist words w_1, ... , w_n with L ⊆ w_1^* ··· w_n^*. For PRBCA, even decidability was open. Our methods also show that there is no language of a (P)RBCA of intermediate growth. This means that the number of words of each length grows either polynomially or exponentially. Part of our proof is likely of independent interest: we show that one can translate an RBCA into a machine with Z-counters in logarithmic space, while preserving the accepted language.

The problem of synthesizing controllers with respect to temporal logic specifications has received considerable attention in the last decade, especially in the context of cyber-physical systems (CPS) design. Such problems occur, for example, in the control of autonomous robots deployed in warehouses, for underwater inspection, or in rescue and evacuation scenarios. In order to solve such problems, techniques from the formal methods and control communities are combined. In particular, two-player games over finite graphs are used at the higher layer to reason about the temporal logic specification, while classical feedback-controller synthesis is used at the lower layer to actuate the underlying dynamical system. Unfortunately, in existing approaches, the integration of high-level game-solving algorithms with feedback-controller synthesis has to substantially compromise the unique features of both methodologies in order to achieve compatibility. In order to close this gap, this talk presents a novel approach to the higher-layer game-solving problem targeted towards controller synthesis for CPS. The key contribution of our work is two-fold. First, instead of synthesizing single strategies, we compute \emph{strategy templates} which allow for a certified translation into low-level feedback-controller synthesis problems. Second, we allow for \emph{progress group augmentations} of the games we solve, which allow us to capture rich liveness properties implemented by the lower control layer and reduce the conservativeness of the higher-layer synthesis game. We showcase our work using an example from robot motion planning. This is joint work with Lucas Neves Egidio, Matteo Della Rossa, Anne-Kathrin Schmuck, and Raphaël Jungers.

Linear Recurrence Sequences (LRS) are a fundamental mathematical primitive for a plethora of applications such as model checking, probabilistic systems, computational biology, and economics. Positivity (are all terms of the given LRS at least 0?) and Ultimate Positivity (are all but finitely many terms of the given LRS at least 0?) are important open number-theoretic decision problems. Recently, the robust versions of these problems, which ask whether the LRS is (Ultimately) Positive despite small perturbations to its initialisation, have gained attention as a means to model the imprecision that arises in practical settings. In this paper, we consider the Robust Positivity and Ultimate Positivity problems where the neighbourhood of the initialisation, specified in a natural and general format, is also part of the input. We contribute by proving sharp decidability results: decision procedures at orders our techniques cannot handle would entail significant number-theoretic breakthroughs.
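A quick illustration of the objects involved (my example, not from the paper): generating LRS terms is elementary, and checking positivity of a finite prefix gives only evidence, never a proof — which is precisely why Positivity is a hard decision problem.

```python
def lrs_terms(coeffs, init, n):
    """First n terms of the linear recurrence
    u_{k+d} = coeffs[0]*u_{k+d-1} + ... + coeffs[d-1]*u_k,
    with initial terms init = (u_0, ..., u_{d-1})."""
    u = list(init)
    while len(u) < n:
        u.append(sum(c * x for c, x in zip(coeffs, reversed(u[-len(coeffs):]))))
    return u[:n]

# Fibonacci: u_{k+2} = u_{k+1} + u_k with u_0 = u_1 = 1 -- plainly positive
fib = lrs_terms([1, 1], [1, 1], 10)
print(fib)                       # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(all(t >= 0 for t in fib))  # True -- but a positive prefix alone
                                 # does not decide Positivity
```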

Given a Boolean formula $\phi$ over $n$ variables, the problem of model counting is to compute the number of solutions of $\phi$. Model counting is a fundamental problem in computer science with wide-ranging applications in domains such as quantified information leakage, probabilistic reasoning, network reliability, neural network verification, and more. Owing to the \#P-hardness of the problem, Stockmeyer initiated the study of the complexity of approximate counting. Stockmeyer showed that $\log n$ calls to an NP oracle are necessary and sufficient to achieve $(\epsilon,\delta)$ guarantees. The hashing-based framework proposed by Stockmeyer has been very influential in designing practical counters over the past decade, wherein a SAT solver substitutes the NP oracle calls in practice. It is well known that an NP oracle does not fully capture the behavior of SAT solvers, as SAT solvers are also designed to provide satisfying assignments when a formula is satisfiable, without additional overhead. Accordingly, the notion of a SAT oracle has been proposed to capture the behavior of SAT solvers: given a Boolean formula, a SAT oracle returns a satisfying assignment if the formula is satisfiable, and returns unsatisfiable otherwise. Since the practical state-of-the-art approximate counting techniques use SAT solvers, a natural question is whether a SAT oracle is more powerful than an NP oracle in the context of approximate model counting. The primary contribution of this work is to study the relative power of the NP oracle and the SAT oracle in the context of approximate model counting. The techniques previously proposed in the context of an NP oracle are too weak to provide strong bounds in the context of a SAT oracle since, in contrast to an NP oracle that provides only one bit of information, a SAT oracle can provide $n$ bits of information. We therefore develop a new methodology to achieve the main result: a SAT oracle is no more powerful than an NP oracle in the context of approximate model counting.
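To make the hashing-based framework concrete, here is a deliberately crude toy sketch of my own (real counters such as those in the Stockmeyer tradition use XOR streamlining with an actual SAT solver, and a careful estimator; none of the specifics below are from the paper): conjoin random parity (XOR) constraints — each cuts the solution space roughly in half — until the formula becomes unsatisfiable, and read off an estimate from the number of constraints needed.

```python
import itertools, random

def model_count_exact(formula, n):
    """Exhaustive #SAT over n variables -- the #P-hard baseline."""
    return sum(1 for a in itertools.product([0, 1], repeat=n) if formula(a))

def hash_estimate(formula, n, trials=20, seed=0):
    """Crude hashing-style estimator: add random XOR constraints until
    the residual formula is unsatisfiable; if m constraints sufficed,
    take 2^(m-1) as one rough estimate and average over trials.
    Satisfiability is checked here by brute force (toy 'SAT oracle')."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        sub, m = formula, 0
        while any(sub(a) for a in itertools.product([0, 1], repeat=n)):
            bits = [rng.randrange(2) for _ in range(n)]
            parity = rng.randrange(2)
            sub = (lambda a, f=sub, b=bits, t=parity:
                   f(a) and sum(x * y for x, y in zip(a, b)) % 2 == t)
            m += 1
        total += 2 ** (m - 1)
    return total / trials

phi = lambda a: a[0] or a[1]          # 3 models over 2 variables
print(model_count_exact(phi, 2))      # -> 3
print(hash_estimate(phi, 2) >= 1)     # a rough, randomized estimate
```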

Games on graphs provide an effective way of constructing systems that keep running while fulfilling their tasks at hand, in the presence of physical environments. Such games, however, inherently assume that the environment is always antagonistic, and are solved by computing one strategy that allows the system to win against all moves of the environment. But in practice, the environment rarely acts adversarially (in fact, it might even be possible to control the environment agents), and the operating conditions rarely remain static enough for a single strategy to be sufficient. It is due to this drawback that games on graphs have found limited application in the construction of co-dependent systems operating in transient conditions. The key missing ingredient in the remedy of the mentioned problems is the notion of permissiveness of the agents' strategies involved in the game being played. There have been attempts to incorporate permissiveness into the synthesis of strategies, but with limited success. With the goal to automatically obtain resilient and team-playing systems that solve distributed ω-regular tasks, in our work, we present novel ways to produce permissive strategies that enable the systems to negotiate with other systems automatically. To this end, we formalize the notions of permissive assumptions on the environment [1] and permissive strategies for the system [2]. We then use novel templates to describe both for a game on a graph. With that in place, we give polynomial-time algorithms to compute permissive assumptions and permissive strategies for ω-regular objectives. Furthermore, we give polynomial-time algorithms to compose objectives, to build fault-tolerant strategies, and to automatically negotiate with other agents to complete different tasks. This is joint work with Kaushik Mallik, Satya Prakash Nayak and Anne-Kathrin Schmuck.
References:
1. Anand, A., Mallik, K., Nayak, S. P., and Schmuck, A.-K. Computing adequately permissive assumptions for synthesis. In Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2023), ETAPS.
2. Anand, A., Nayak, S. P., and Schmuck, A.-K. Synthesizing permissive winning strategy templates for parity games. Tech. rep., 2023. To appear in the 35th International Conference on Computer Aided Verification (CAV) 2023.

This work is about the logic FO+, a fragment of first-order logic on finite words where monadic predicates can only appear positively. We show that there is an FO-definable language that is monotone in the monadic predicates but not definable in FO+. This provides a simple proof that Lyndon's preservation theorem fails on finite structures. We lift this example language to finite graphs, thereby providing a new result of independent interest for FO-definable graph classes: negation might be needed even when the class is closed under addition of edges. We finally show that, given a regular language, it is undecidable whether it is definable in FO+.

The classical Church synthesis problem, solved by Büchi and Landweber, treats the synthesis of finite-state systems. The synthesis of infinite-state systems, on the other hand, has only been investigated a few times since then, with no complete or systematic solution. We present a systematic study of the synthesis of infinite-state systems. The main step involves the synthesis of MSO-definable parity games, that is, finding MSO-definable uniform memoryless winning strategies for these games.

Higher dimensional automata (HDAs) were introduced by Pratt and van Glabbeek as a general geometric model for non-interleaving concurrency, generalizing standard finite automata, but also Petri nets and event structures. In this setup, autoconcurrency is allowed and events may have a duration. Uli Fahrenberg et al. have recently introduced languages of HDAs, which consist of subsumption-closed sets of partially ordered multisets with interfaces (ipomsets), and shown a Kleene theorem and a Myhill-Nerode theorem for them. In this work we further develop the language theory of HDAs. We show a pumping lemma, which allows us to expose a class of non-regular ipomset languages. We show that inclusion of regular languages is decidable (and hence so is emptiness), and that intersections of regular languages are again regular. On the other hand, no universal finite HDA exists, so complements of regular languages are not regular. We introduce a width-bounded version of complement and show that width-bounded complements of regular languages are again regular.

We present a polynomial-time algorithm that constructs a deterministic parity automaton (DPA) from a given set of positive and negative ultimately periodic example words. We show that this algorithm is complete for the class of ω-regular languages, that is, it can learn a DPA for each regular ω-language. For this purpose, we propose a canonical form of DPA, which we call the precise DPA (of a language), and show that it can be constructed from the syntactic family of right congruences for that language (introduced by Maler and Staiger in 1997). The precise DPA can be of exponential size compared to a minimal DPA for a language, but it can also be a minimal DPA, depending on the structure of the language. The upper bound that we obtain on the number of examples required for our algorithm to find a DPA for L is therefore exponential in the size of a minimal DPA, in general. But we identify two parameters of regular ω-languages such that fixing these parameters makes the bound polynomial. This is joint work with Christof Löding.
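The example words the learner consumes are ultimately periodic, i.e. of the shape u·v^ω, so membership can be decided on a finite lasso. A small sketch of my own (assuming a min-even parity acceptance condition; conventions vary) of how a DPA classifies such an example word:

```python
def dpa_accepts_lasso(delta, prio, init, u, v):
    """Run a deterministic parity automaton on the ultimately periodic
    word u.v^omega: pump v until the state at the loop boundary repeats,
    then check the minimal priority on the cycle (min-even convention)."""
    q = init
    for a in u:
        q = delta[q, a]
    seen, history = {}, []
    while q not in seen:
        seen[q] = len(history)        # where this boundary visit starts
        for a in v:
            history.append(prio[q])
            q = delta[q, a]
    return min(history[seen[q]:]) % 2 == 0

# DPA for "infinitely many a's": q0 after an 'a' (priority 0), else q1
delta = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
         ('q1', 'a'): 'q0', ('q1', 'b'): 'q1'}
prio = {'q0': 0, 'q1': 1}
print(dpa_accepts_lasso(delta, prio, 'q0', "b", "ab"))  # True:  b(ab)^w
print(dpa_accepts_lasso(delta, prio, 'q0', "a", "b"))   # False: a b^w
```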

We provide an algorithm to solve Rabin and Streett games over $n$ vertices, $m$ edges, and $k$ colours that runs in $\tilde{O}\left(mn^3(k!)^{1+o(1)}\right)$ time and $O(nk\log k \log n)$ space, where $\tilde{O}$ hides poly-logarithmic factors. Our algorithm improves the currently best known run time of $O\left(mn^2(k!)^{2+o(1)}\right)$, obtained by converting a Rabin game into a parity game, by a super-quadratic dependence on $k!$, while simultaneously improving on its exponential space requirement. Our main technical ingredient is a characterization of progress measures for Rabin games using colourful trees and a combinatorial construction of succinctly represented, universal colourful trees. Colourful universal trees are generalisations of the universal trees used by Jurdziński and Lazić (2017) to solve parity games, as well as of the Rabin progress measures of Klarlund and Kozen (1991). Our algorithm for Rabin games performs lifting on progress measures defined on succinct, colourful, universal trees.

Weighted timed games (WTGs) are two-player zero-sum games played in a timed automaton equipped with integer weights. We consider optimal reachability objectives, in which one of the players, whom we call Min, wants to reach a target location while minimising the cumulated weight. While knowing whether Min has a (deterministic) strategy to guarantee a value lower than a given threshold is undecidable (with two or more clocks), several conditions have been given to recover decidability, one of them being divergence, another the restriction to one-clock WTGs with only non-negative weights. We will briefly present our contributions in this domain. We first extend the decidable fragments by showing that for all one-clock WTGs (with arbitrary weights), the value function can be computed in exponential time (if weights are encoded in unary). Next, in such weighted timed games (as in untimed weighted games in the presence of negative weights), Min may need finite memory to play (close to) optimally. It is thus tempting to try to emulate this finite memory with other strategic capabilities. In particular, we allow the players to use stochastic decisions, both in the choice of transitions and of timing delays. We give, for the first time, a definition of the expected value in weighted timed games, overcoming several theoretical challenges. We then show that, in divergent weighted timed games, the stochastic value is indeed equal to the classical (deterministic) value, thus proving that Min can guarantee the same value while only using stochastic choices and no memory. Finally, even stochastic strategies may not be implementable, since they can use infinite precision of clocks in the choice of the delay or in the knowledge about the configurations. Robustness is a known feature to encode the imprecision of delays in strategies: robustness allows the Max player (the opponent of Min) to slightly perturb the delay chosen by Min. In the literature, two robust semantics exist: the conservative semantics checks a guard after the perturbation, and the excessive semantics checks a guard before it. As for deterministic strategies, deciding whether Min has a (robust) strategy to guarantee a value lower than a given threshold is undecidable. By adapting the value iteration process introduced by Alur, we compute the (robust) value in acyclic WTGs under the conservative and excessive semantics. This is joint work with Benjamin Monmege and Pierre-Alain Reynier.

A pinnacle of sparsity theory, initiated by Nešetřil and Ossona de Mendez, is the result of Grohe, Kreutzer, and Siebertz which identifies the subgraph-closed classes of graphs with FPT model checking as exactly the nowhere dense classes. One of the key steps in the proof is characterizing nowhere dense classes of graphs in terms of a game called the Splitter game. There is an ongoing effort aimed at generalizing sparsity theory to classes of graphs that are not necessarily sparse. One of the most recent steps toward this goal is an analogous characterization of monadically stable classes of graphs (which contain FO-interpretations of nowhere dense graph classes) in terms of a game called the Flipper game [Gajarský et al., arXiv:2301.13735]. The interesting thing is that the paper presents two different proofs of the characterization – one is a direct combinatorial proof, while the other relies on tools from ("infinite") model theory. There is also a notion of a canonical strategy in the Flipper game [Gajarský et al., arXiv:2303.01473]. This time, the only presented proof uses tools from model theory, and it is not obvious how to reprove the result in a combinatorial way. In my talk I'll present a simplified version of this result. Namely, I'll sketch a model-theoretic proof that in nowhere dense classes of graphs there are at most a constant number of progressing moves for Splitter in the Splitter game. Although this is a purely combinatorial theorem, the only proof we know (i.e., the one I'll sketch) relies on the compactness theorem for first-order logic.

In general, finite concurrent two-player games behave poorly, especially when compared to their turn-based counterparts. For instance, in deterministic turn-based games — with a Borel objective — either of the players has a winning strategy, which fails in concurrent reachability games even with a single non-trivial concurrent interaction of the players. Furthermore, considering stochastic games: in finite turn-based parity games, positional optimal strategies always exist for both players, whereas optimal strategies need not exist in concurrent games, even for a reachability objective. With more involved objectives, it actually gets even worse: for instance, playing subgame-optimally may require infinite memory in concurrent parity games. We present a two-step approach to characterize a structural condition on the interactions appearing in a concurrent game that ensures that such games behave well — for a specific desirable property — while still being more general than turn-based games. The two steps are as follows, given a desirable property φ that we want to hold in concurrent games — for instance, the existence of optimal strategies in reachability games. First, characterize the safe local interactions that behave well individually — that is, such that the property φ is ensured in all games where they are the only non-trivial local interaction. Second, in games where all local interactions are safe, study whether the property φ also holds. If so, we have proved the following, w.r.t. the property φ:
• Unsafe local interactions do not behave well, even individually;
• Safe local interactions behave well, even collectively.
We have used this approach in several papers [1, 2, 3], which are joint works with Patricia Bouyer and Stéphane Le Roux.

We are interested in tools for synthesizing strategies in timed games with parity objectives. From a theoretical point of view, we define and interpret these games as in the paper "The Element of Surprise" by Luca de Alfaro et al. (CONCUR 2003). This paper highlights several subtleties of timed games. They can be summarized as follows: representing a timed parity game by an interleaving of waiting steps (continuous transitions) and actions (discrete transitions) does not capture the strategies that require being the fastest (as in a cowboy duel). Such strategies are called surprise strategies and have no equivalent no-surprise strategies. Thus, taking surprise into account changes the game, i.e., the winner may be different depending on whether we consider surprise strategies. The paper studies reachability and safety objectives, but can also be used to adapt Zielonka's procedure to solve timed parity games. To our knowledge, the results of this paper have not been implemented in concrete tools. The paper does not give an explicit construction or other implementation methods, and Uppaal, a well-known timed game solver, considers a different semantics and focuses on reachability objectives. We therefore propose an explicit construction corresponding to the work of de Alfaro et al. and present its use in Zielonka's procedure for solving timed parity games. We believe that a tool for experimenting with new semi-symbolic algorithms on timed games would benefit the community. TChecker is a project built with this goal in mind and has been in active development for at least eight years. However, it is a model checker: it does not yet support games. Therefore, together with Gilles Geeraerts, Jean-François Raskin, and Antoine Bedaton, we are working on integrating game support into TChecker and implementing our timed Zielonka algorithm there as a proof of concept. Note: this work is being done in the context of my PhD.

We study infinite two-player win/lose games (A,B,W) where A and B are the action sets of the two players and W is the winning set. At each round, Players 1 and 2 concurrently choose one action in A and B, respectively, generating an infinite sequence. Player 1 wins iff said sequence lies in W. In contrast to the classical framework of games on graphs, this framework does not allow any observation of the current state of the game other than what a player can compute on the fly from the stream of actions played. We study three different topological classes of winning sets (sets in the Hausdorff difference hierarchy, sets described using a combination of open sets and Büchi conditions, and sets described using a combination of open sets and parity conditions), and show that, under a well partial order condition on the winning sets induced by the histories, if Player 1 has a(n almost-surely) winning strategy, then she has a finite-memory one. These classes include variations of typically well-studied games such as Büchi/parity games on finite graphs and energy games. Co-authors: Patricia Bouyer and Stéphane Le Roux.

In the language-theoretic approach to refinement verification, we check that all traces of an implementation belong to the language of a specification. We consider the refinement verification problem for asynchronous programs against specifications given by a Dyck language. We show that this problem is EXPSPACE-complete -- the same complexity as language emptiness and as refinement verification against a regular specification. Our algorithm uses several novel technical ingredients. First, we show that checking if the coverability language of a succinctly described vector addition system with states (VASS) is contained in a Dyck language is EXPSPACE-complete. Second, in the more technical part of the proof, we define an ordering on words and show a downward closure construction that allows replacing the (context-free) language of each task in an asynchronous program by a regular language. Unlike the downward closure operations usually considered in infinite-state verification, our ordering is not a well-quasi-ordering, and we have to construct the regular language ab initio. Once the tasks can be replaced, we show a reduction to an appropriate VASS and use our first ingredient. In addition to the inherent theoretical interest, refinement verification with Dyck specifications captures common practical resource usage patterns based on reference counting, for which few algorithmic techniques were known. Based on joint work with Moses Ganardi, Rupak Majumdar, Ramanathan S. Thinniyam, and Georg Zetzsche for ICALP 2023.
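Dyck specifications are just well-nested bracket languages; a minimal membership check (my illustration of the specification side only, not of the EXPSPACE algorithm) mirrors the acquire/release discipline behind reference counting.

```python
PAIRS = {'(': ')', '[': ']'}

def in_dyck(word, pairs=PAIRS):
    """Membership in the Dyck language of well-nested brackets -- the
    shape of specification matching acquire/release-style resource
    usage (e.g. reference counting: every release matches an acquire)."""
    stack = []
    for ch in word:
        if ch in pairs:
            stack.append(pairs[ch])          # expect this closer later
        elif not stack or stack.pop() != ch:
            return False                     # unmatched or crossing pair
    return not stack                         # everything closed

print(in_dyck("([])()"))  # True
print(in_dyck("([)]"))    # False
```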

Automatic structures are structures whose universe and relations can be represented as regular languages. It follows from the standard closure properties of regular languages that the first-order theory of an automatic structure is decidable. While existential quantifiers can be eliminated in linear time by application of a homomorphism, universal quantifiers are commonly eliminated via the identity ∀x. Φ ≡ ¬(∃x. ¬Φ). If Φ is represented in the standard way as an NFA, a priori this approach results in a doubly exponential blow-up. However, the recent literature has shown that there are classes of automatic structures for which universal quantifiers can be eliminated without this blow-up when treated as first-class citizens rather than by resorting to double complementation. While existing lower bounds for some classes of automatic structures show that a singly exponential blow-up is unavoidable when eliminating a universal quantifier, it is not known whether there may be better approaches that avoid the naïve doubly exponential blow-up. We answer this question negatively. In my presentation, starting with a short introduction to the field of automatic structures, I will outline the construction of a family of NFA representing automatic relations for which the minimal NFA recognising the language after a universal projection step is doubly exponential, and deciding whether this language is empty is ExpSpace-complete.
Keywords: automatic structures, universal projection, tiling problems, state complexity
Authors: Christoph Haase, R.P.

Consider the problem of computing the probability of a query over a tuple-independent probabilistic database, known as the probabilistic query evaluation (PQE) problem. The problem is well-known to be #P-hard in data complexity for conjunctive queries in general, as well as for several subclasses of conjunctive queries (Dalvi and Suciu 2004). Existing approximation approaches for dealing with hard queries have centred on computing the lineage of the query over the database, which can be intractable for all but the smallest of queries due to the exponential dependence of the lineage size on the query length.In this talk, I will show how to construct a fully polynomial-time randomized approximation scheme (FPRAS) for the PQE problem for any class of self-join-free conjunctive queries of bounded hypertree width, that runs in time polynomial in *both* the query length and database size. An interesting consequence of this result is the existence of classes of queries that are #P-hard in data complexity to evaluate exactly, yet easy to approximate both in terms of query length and database size. The result builds on a recent breakthrough in approximating the number of fixed-size trees accepted by a non-deterministic finite tree automaton (Arenas, Croquevielle, Jayaram and Riveros 2021).Joint work with Kuldeep S. Meel. Based on a paper accepted to PODS 2023.

We discuss how to efficiently combine formal methods, Monte Carlo Tree Search (MCTS), and deep learning in order to produce high-quality receding horizon strategies in large Markov decision processes (MDPs). In particular, we use model-checking techniques to guide the MCTS algorithm in order to generate offline samples of high-quality decisions on a representative set of states of the MDP. Those samples can then be used to train a neural network that imitates the strategy used to generate them. This neural network can either be used as a guide on a lower-latency MCTS online search, or alternatively be used as a full-fledged strategy when minimal latency is required. We use statistical model checking to detect when additional samples are needed and to focus those additional samples on configurations where the learnt neural network strategy differs from the (computationally expensive) offline strategy. We illustrate the use of our method on MDPs that model the Frozen Lake and Pac-Man environments --- two popular benchmarks to evaluate reinforcement-learning algorithms. The talk is based on the paper of the same title accepted at AAMAS 2023. This is joint work with Damien Busatto-Gaston, Jean-François Raskin and Guillermo A. Pérez.

A $k$-Counter Net ($k$-CN) is a finite-state automaton equipped with $k$ integer counters that are not allowed to become negative, but do not have explicit zero tests. This language-recognition model can be thought of as labelled vector addition systems with states, some of which are accepting. Certain decision problems for $k$-CNs become easier, or indeed decidable, when the dimension $k$ is small. Yet, little is known about the effect that the dimension $k$ has on the class of languages recognised by $k$-CNs. Specifically, it would be useful if we could simplify algorithmic reasoning by reducing the dimension of a given CN. To this end, we introduce the notion of dimension-primality for $k$-CNs, whereby a $k$-CN is prime if it recognises a language that cannot be decomposed into a finite intersection of languages recognised by $d$-CNs, for some $d<k$. We show that primality is undecidable. We also study two related notions: dimension-minimality (where we seek a single language-equivalent $d$-CN of lower dimension) and language regularity. Additionally, we explore the trade-offs in expressiveness between dimension and non-determinism for CNs. This presentation is based on submitted work with Shaull Almagor, Guy Avni and Henry Sinclair-Banks.
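The model is easy to simulate on a finite word, since counters stay bounded by the word length: track all reachable (state, counter-vector) configurations, discarding any move that would make a counter negative. A minimal sketch (names and encoding are my own, not from the talk):

```python
def cn_accepts(word, delta, init, finals, k):
    """Nondeterministic k-Counter Net simulation: counters may never go
    negative and are never tested for zero; acceptance is by final state.
    delta maps (state, letter) to a set of (state, counter-update) pairs."""
    configs = {(init, (0,) * k)}
    for letter in word:
        nxt = set()
        for state, counters in configs:
            for state2, update in delta.get((state, letter), ()):
                new = tuple(c + u for c, u in zip(counters, update))
                if all(c >= 0 for c in new):  # reject negative counters
                    nxt.add((state2, new))
        configs = nxt
    return any(state in finals for state, _ in configs)

# A 1-CN for { a^n b^m : m <= n }: count a's up, b's down.
delta = {('p', 'a'): {('p', (1,))},
         ('p', 'b'): {('q', (-1,))},
         ('q', 'b'): {('q', (-1,))}}
print(cn_accepts("aabb", delta, 'p', {'p', 'q'}, 1))  # True
print(cn_accepts("abb", delta, 'p', {'p', 'q'}, 1))   # False
```

Note that without zero tests this 1-CN cannot pin down exactly $a^n b^n$; it accepts every $a^n b^m$ with $m \le n$, which is the characteristic weakness of counter nets compared to counter automata.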

This is joint work with Bharat Adsul, thanks to the Highlights Extended Stay Support Scheme 2023. We show that all regular languages can be described by applying a sequence of simple so-called ``condensation maps.'' A condensation map consists, given an input word, in selecting the maximal factors that belong to a condensation language and evaluating all of them in parallel. A condensation language is a language for which it is guaranteed that these maximal factors cannot overlap: it is assumed to be infix-closed and to have the property that if $ua$ and $av$ are in the language, then $uav$ is as well. The simple condensation maps correspond to very simple cases. This result captures classical proof techniques in the study of regular languages and exhibits similarities with the Krohn-Rhodes theorem and Simon's factorisation forest theorem.

We present a simple calculus for deriving statements about the local behaviour of partial, continuous functions over the reals, within a collection of such functions associated with the elements of a finite partial order. The motivation for this work is drawn from an attempt to foster digitalisation in secondary-education classrooms, in particular in experimental lessons in natural science classes. This provides a way to formally model experiments and to automatically derive the truth of hypotheses made about certain phenomena in such experiments. The proof calculus turns out to be sound in general, complete for a large class of formal experiment models, and decidable in polynomial time. This is joint work with Florian Bruse, Sören Möller and Shahla Rasulzade from the University of Kassel.

Verification of neural networks has attracted a lot of attention recently but faces dramatic scalability issues. While abstraction is a key verification technique to improve scalability, its use for neural networks is so far extremely limited. Previous approaches for abstracting classification networks replace several neurons with one of them that is similar enough. We can classify the similarity as defined either syntactically (using quantities on the connections between neurons) or semantically (on the activation values of neurons for various inputs). Unfortunately, the previous approaches only achieve moderate reductions, if implemented at all. In this work, we provide a more flexible framework, where a neuron can be replaced with a linear combination of other neurons, improving the reduction. We apply this approach both on syntactic and semantic abstractions and implement and evaluate them experimentally. This is joint work with Calvin Chau and Jan Kretinsky.
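The core idea of the semantic variant, replacing a neuron by a linear combination of others that behaves (almost) the same on sampled inputs, can be sketched with a plain least-squares fit. Everything below (activation data, coefficient values, function names) is invented for illustration:

```python
import random

random.seed(0)

# Hypothetical sampled activation values of three neurons on 200 inputs
# ("semantic" information about the layer).
n0 = [random.gauss(0, 1) for _ in range(200)]
n1 = [random.gauss(0, 1) for _ in range(200)]
# Neuron 3 is (almost) a linear combination of neurons 0 and 1.
n3 = [2.0 * a - 0.5 * b + random.gauss(0, 1e-3) for a, b in zip(n0, n1)]

def fit_replacement(xs, ys, target):
    """Fit c0, c1 with target ~= c0*xs + c1*ys by least squares
    (normal equations for the two-coefficient case)."""
    sxx = sum(x * x for x in xs); syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxt = sum(x * t for x, t in zip(xs, target))
    syt = sum(y * t for y, t in zip(ys, target))
    det = sxx * syy - sxy * sxy
    c0 = (sxt * syy - syt * sxy) / det
    c1 = (sxx * syt - sxy * sxt) / det
    err = max(abs(c0 * x + c1 * y - t) for x, y, t in zip(xs, ys, target))
    return c0, c1, err

c0, c1, err = fit_replacement(n0, n1, n3)
print(round(c0, 2), round(c1, 2), err < 0.01)
```

If the residual error is small enough, neuron 3 can be removed and its outgoing weights redistributed along the fitted coefficients; soundness of the resulting abstraction then hinges on bounding that residual, which is where the verification framework comes in.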

Two graphs $G$ and $H$ are \emph{homomorphism indistinguishable} over a class of graphs $\mathcal{F}$ if for all graphs $F \in \mathcal{F}$ the number of homomorphisms from $F$ to $G$ is equal to the number of homomorphisms from $F$ to $H$. Many natural equivalence relations comparing graphs, such as (quantum) isomorphism, spectral equivalence, and logical equivalences, can be characterised as homomorphism indistinguishability relations over certain graph classes. In this talk, the interplay between the properties of a graph class and its homomorphism indistinguishability relation is studied. As an application, \emph{self-complementarity}, a property of logics on graphs satisfied by many well-studied logics, is identified. It is proven that the equivalence over a self-complementary logic admitting a characterisation as a homomorphism indistinguishability relation can be characterised by homomorphism indistinguishability over a minor-closed graph class. Thereby, first evidence is provided for a possible connection between minors and homomorphism indistinguishability, as conjectured by Roberson~(2022).
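The basic counting notion is elementary to state in code: a homomorphism is a vertex map sending every edge to an edge. A brute-force counter on small graphs (for intuition only; the talk's results concern which graph classes $\mathcal{F}$ give which equivalences):

```python
from itertools import product

def hom_count(F_vertices, F_edges, G_vertices, G_edges):
    """Count homomorphisms from F to G by brute force: vertex maps that
    send every edge of F to an edge of G (undirected)."""
    G_adj = {frozenset(e) for e in G_edges}
    return sum(
        all(frozenset((f[u], f[v])) in G_adj for u, v in F_edges)
        for image in product(G_vertices, repeat=len(F_vertices))
        for f in [dict(zip(F_vertices, image))]
    )

K2 = (['x', 'y'], [('x', 'y')])
K3 = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
print(hom_count(*K2, *K3))  # 6: each of K3's 3 edges, in 2 orientations
print(hom_count(*K3, *K3))  # 6: the homomorphisms K3 -> K3 are the 3! bijections
```

For example, $\mathrm{hom}(K_2, G) = 2|E(G)|$, so homomorphism indistinguishability over $\{K_1, K_2\}$ already equates exactly the graphs with the same numbers of vertices and edges.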

In recent years, there has been increasing interest in the connections between supervisory control theory and reactive synthesis. As the two fields use similar techniques, there is great hope that technologies from one field could be used in the other. In this spirit, we provide an alternative reduction from the supervisor synthesis problem to solving B\"uchi games, via games with a non-blocking objective. Our reduction is more compact and uniform than previous reductions. As a consequence, it gives an asymptotically better upper bound on the time complexity of the supervisory control synthesis problem. Our reduction also breaks a widely held belief about the impossibility of reducing the supervisory control synthesis problem to a game with a linear winning condition.

Semiring semantics evaluates logical statements by values in a commutative semiring, which can model information such as costs or access restrictions. Random semiring interpretations, induced by a probability distribution on the semiring, generalise random structures, which raises the question to what extent the classical 0-1 laws of first-order logic apply to semiring semantics. In this talk, we will see that a 0-1 law holds for many semirings, that is, every first-order sentence asymptotically almost surely evaluates to a unique semiring value on random semiring interpretations. For finite and infinite lattice semirings, we further show that only three semiring values are possible: 0, 1, and the smallest non-zero value. The proof is a combination of the classical extension axioms and an algebraic representation of first-order sentences tailored to semiring semantics. Joint work with Erich Grädel, Hayyan Helal, and Richard Wilke.
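To see how a sentence takes a semiring value rather than a truth value: disjunction and existential quantification are interpreted by the semiring addition, conjunction and universal quantification by the multiplication. A minimal sketch in a lattice semiring (max as addition, min as multiplication), with a hypothetical interpretation of a unary atom:

```python
def sem_exists(domain, phi):
    """[[exists x. phi]] = semiring sum (here: max) over the domain."""
    return max(phi(a) for a in domain)

def sem_forall(domain, phi):
    """[[forall x. phi]] = semiring product (here: min) over the domain."""
    return min(phi(a) for a in domain)

# Lattice semiring ({0, 1, 2, 3}, max, min, 0, 3); a hypothetical
# interpretation assigns a semiring value to each instantiated atom P(a).
P = {0: 2, 1: 0, 2: 3}
domain = P.keys()
print(sem_exists(domain, P.get))  # 3
print(sem_forall(domain, P.get))  # 0
```

In a random interpretation the atom values are drawn from a distribution on the semiring, and the 0-1 law asks which value a fixed sentence takes asymptotically almost surely.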

We study extensions of Semenov arithmetic, the first-order theory of the structure $\langle \mathbb{N},+,2^x\rangle$. It is well-known that this theory becomes undecidable when extended with regular predicates over number strings, such as the Büchi $V_2$-predicate. We therefore restrict ourselves to the existential theory of Semenov arithmetic and show that this theory is decidable in 2-EXPSPACE when extended with arbitrary regular predicates over number strings. Our approach relies on a reduction to the emptiness problem for a restricted class of affine vector addition systems with states, which we show decidable in EXPSPACE. As an application of our result, we settle an open problem from the literature and show decidability of a class of string constraints involving length constraints.

A central task in control theory, artificial intelligence, and formal methods is to synthesize reward-maximizing strategies for agents that operate in partially unknown environments. In environments modelled by gray-box Markov decision processes (MDPs), the impact of the agents' actions is known in terms of successor states but not the stochastics involved. In this talk I will present a new strategy synthesis algorithm for gray-box MDPs, via reinforcement learning that utilizes interval MDPs as internal model. To compete with limited sampling access in reinforcement learning, the algorithm incorporates two novel concepts, focusing on rapid and successful learning rather than on optimality: lower confidence bound exploration reinforces variants of already learned practical strategies, and action scoping reduces the learning action space to promising actions. In particular, I will focus on how these methods can be applied in the context of formal verification of queries asking for the existence of a strategy under certain constraints. I will illustrate the benefits of our algorithms by means of a prototypical implementation applied to examples from the AI and formal methods communities and discuss possible extensions. This is joint work with Christel Baier, Clemens Dubslaff, and Stefan J. Kiebel. The full paper has been accepted for publication at the NASA Formal Methods conference 2023; an extended preprint is available.

A fundamental problem in refinement verification is to check that the language of behaviors of an implementation is included in the language of the specification. We consider the refinement verification problem where the implementation is a multithreaded shared memory system modeled as a multistack pushdown automaton and the specification is an input-deterministic multistack pushdown language. Our main result shows that the context-bounded refinement problem, where we ask that all behaviors generated in runs with a bounded number of context switches belong to a specification given by a Dyck language, is decidable and coNP-complete. The more general case of input-deterministic languages follows, with the same complexity. Context-bounding is essential since emptiness for multipushdown automata is already undecidable, and so is the refinement verification problem for the subclass of regular specifications. Input-deterministic languages capture many non-regular specifications of practical interest and our result opens the way for algorithmic analysis of these properties. The context-bounded refinement problem is coNP-hard already with deterministic regular specifications; our result demonstrates that the problem is not harder despite the stronger class of specifications. Our proof introduces several general techniques for formal languages and counter programs and shows that the search for counterexamples can be reduced in non-deterministic polynomial time to the satisfiability problem for existential Presburger arithmetic. These techniques are essential to ensure the coNP upper bound: existing techniques for regular specifications are not powerful enough for decidability, while simple reductions lead to problems that are either undecidable or have high complexities.
As a special case, our decidability result gives an algorithmic verification technique to reason about reference counting and re-entrant locking in multithreaded programs. This is joint work with Pascal Baumann, Moses Ganardi, Rupak Majumdar and Georg Zetzsche.
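For readers unfamiliar with Dyck specifications: a Dyck language consists of well-nested bracket sequences, which is exactly the shape of properties like "every lock acquire is matched by a later release in well-nested fashion". A standard stack-based membership check (an illustration, unrelated to the paper's decision procedure):

```python
def is_dyck(word):
    """Membership in a two-letter-pair Dyck language of well-nested
    brackets, e.g. with '(' / ')' as acquire/release of one lock and
    '[' / ']' of another."""
    pairs = {'(': ')', '[': ']'}
    stack = []
    for c in word:
        if c in pairs:
            stack.append(pairs[c])          # expect this closer later
        elif not stack or stack.pop() != c:  # wrong or missing closer
            return False
    return not stack                         # everything must be closed

print(is_dyck("([])()"))  # True
print(is_dyck("([)]"))    # False
```

Membership is thus trivial; the difficulty addressed in the talk is checking that *all* context-bounded behaviors of a multistack system project into such a language.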

Consider oriented graph nodes requiring periodic visits by a service agent. The agent moves among the nodes and receives a payoff for each completed service task, depending on the time elapsed since the previous visit to a node. We consider the problem of finding a suitable schedule for the agent to maximize its long-run average payoff per time unit. We show that the problem of constructing an epsilon-optimal schedule is PSPACE-hard for every fixed non-negative epsilon, and that there exists an optimal periodic schedule of exponential length. We propose randomized finite-memory (RFM) schedules as a compact description of the agent's strategies and design an efficient algorithm for constructing RFM schedules. Furthermore, we construct deterministic periodic schedules by sampling from RFM schedules. This is joint work with David Klaška, Antonín Kučera, and Vít Musil, and it has been accepted as a full paper for the main track of the IJCAI 2023 conference.

Joint work with Bartek Klin (Oxford). We consider first-order interpretations that input and output trees of bounded height. The corresponding functions have polynomial output size, since a first-order interpretation can use a $k$-tuple of input nodes to represent a single output node. We prove that the equivalence problem for such functions is decidable, i.e.~given two such interpretations, one can decide whether for every input tree the two output trees are isomorphic. This result is incomparable to the open problem about deciding equivalence for polyregular string-to-string functions. We also give a calculus, based on prime functions and combinators, which derives all first-order interpretations for unordered trees of bounded height. The calculus is based on a type system, where the type constructors are products, co-products and multisets. Thanks to the results about tree-to-tree interpretations, the equivalence problem is decidable for this calculus. As an application of our results, we show that the equivalence problem is decidable for first-order interpretations between classes of graphs that have bounded tree depth. In all cases studied in this paper, first-order logic and MSO have the same expressive power, and hence all results apply also to MSO interpretations.

The subject of this presentation is conformance checking for timed models, that is, process models that consider both the sequence of events in a process and the timestamps at which each event is recorded. Time-aware process mining is a growing subfield of research, and as tools that seek to discover timing-related properties in processes develop, so does the need for verification techniques that can tackle time constraints and provide insightful quality measures for time-aware process models. In particular, one of the most useful conformance artefacts is the alignment, that is, finding the minimal changes necessary to correct a new observation so that it conforms to a process model. In this presentation, we formulate the problem of timed alignment and solve three cases, each corresponding to a different metric over timed processes. For the first metric, we have an algorithm whose time complexity is linear both in the size of the observed trace and of the process model, and for the second, we provide an encoding in simplex. For the second and third metrics, we further restrict our attention to process models with linear causal processes, which allows us to provide a quadratic-time and a linear-time algorithm, respectively.

Despite the advances in probabilistic model checking, the scalability of the verification methods as well as the explainability of the results remains limited. In particular, when instantiating parametrized Markov decision processes (MDPs) even with moderate values, the state space often becomes extremely large; similarly, the policy, if any is produced, can be too large an object to be understood, or even implemented. To tackle these issues, we propose a simple learning-based approach to construct often near-optimal policies and their interpretable representations as decision trees, even for huge MDPs. The main idea is to generalize optimal strategies from smaller to larger instantiations, using decision-tree learning. This is joint work with Alexandros Evangelidis, Jan Křetínský, Mohammadsadegh Mohagheghi, Stefanie Mohr, and Maximilian Weininger.

We study two-player multi-weighted reachability games played on a finite directed graph, where an agent, called P1, has several quantitative reachability objectives that he wants to optimize against an antagonistic environment, called P2. In this setting, we ask which cost profiles P1 can ensure regardless of the opponent's behavior. Since this set of ensured values may not be a singleton, cost profiles are compared using: (i) a lexicographic order that ranks the objectives a priori, ensuring the uniqueness of an upper value, or (ii) a componentwise order. In the latter case, P1 chooses his ``preferred'' ensured value a posteriori, once he knows the Pareto frontier. We synthesize (i) lexico-optimal strategies and (ii) Pareto-optimal strategies. The strategies are obtained via a fixpoint algorithm which also computes the upper value in polynomial time and the Pareto frontier in exponential time. Finally, we study the complexity of a related decision problem, the constrained existence problem, which is proved to be in P with the lexicographic order and PSpace-complete with the componentwise order. This is joint work with Thomas Brihaye.
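The two comparison modes are easy to contrast on concrete cost profiles (costs, so lower is better): the lexicographic order always has a unique minimum, while the componentwise order leaves a frontier of incomparable profiles. A small sketch on invented data:

```python
def pareto_frontier(profiles):
    """Keep the componentwise-minimal cost profiles (lower is better).
    Quadratic brute force, for illustration."""
    def dominates(p, q):
        # p is at least as good in every component and strictly better somewhere
        return all(a <= b for a, b in zip(p, q)) and p != q
    return [p for p in profiles
            if not any(dominates(q, p) for q in profiles)]

profiles = [(2, 5), (3, 3), (4, 4), (5, 2)]
print(pareto_frontier(profiles))  # [(2, 5), (3, 3), (5, 2)]  -- (4,4) is dominated
print(min(profiles))              # (2, 5): unique lexicographic optimum
```

This is only the comparison step; computing which profiles are actually ensurable in the game is what the fixpoint algorithm of the talk does.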

History-determinism is a property of models where the non-deterministic choices in the model are resolved `on-the-fly' based on the input read so far. History-deterministic parity automata are known to be more succinct than their deterministic counterparts (Kuperberg, Skrzypczak 2015), and exhibit compositionality with games akin to deterministic automata (Henzinger, Piterman 2006); hence they are also called good-for-games. The exact complexity of checking history-determinism for higher parity indices remains open despite significant efforts, and is only known to be polynomial-time for the lower indices of Büchi and co-Büchi. The underlying technique for the recent polynomial-time algorithms for these uses two-token games: finding the winner in the two-token game turns out to be equivalent to checking history-determinism when the parity automaton is a Büchi or co-Büchi automaton (Bagnol, Kuperberg 2018; Boker, Kuperberg, Lehtinen, Skrzypczak 2020). It is conjectured that two-token games characterise history-determinism for all parity automata; this is known as the two-token or the G2 conjecture. We show that for Büchi automata, checking history-determinism is equivalent to solving the joker game. Joker games were originally used by Kuperberg and Skrzypczak (2015) to give a polynomial-time algorithm for checking history-determinism of a co-Büchi automaton, but it is not known whether joker games characterise history-determinism for co-Büchi automata. The arena of joker games is smaller than that of two-token games by a linear factor, and thus solving joker games is faster by a quadratic factor than solving two-token games for Büchi automata. Moreover, our result implies that two-token games characterise history-determinism for Büchi automata, and thus gives an alternative proof of the two-token conjecture for the Büchi case.
This leads us to propose the joker conjecture: Checking history-determinism on a parity automaton is equivalent to finding the winner in the joker game. This is a stronger version of the two-token conjecture. We also hope that our ongoing work on joker games could lead to insights that help us progress in discovering efficient algorithms for checking history-determinism, as well as towards the G2 conjecture.(Joint ongoing work with Udi Boker, Marcin Jurdziński and Karoliina Lehtinen)

Counting abilities in finite automata are traditionally provided by two orthogonal extensions: adding a single counter that can be tested for zeroness at any point, or adding Z-valued counters that are tested for equality only at the end of runs. In this talk, I will report on finite automata extended with both types of counters. They are called Parikh One-Counter Automata (POCA): the ``Parikh'' part referring to the evaluation of counters at the end of runs, and the ``One-Counter'' part to the single counter that can be tested during runs. Their expressiveness, in the deterministic and nondeterministic variants, is investigated; it is shown in particular that there are deterministic POCA languages that cannot be expressed without nondeterminism in the original models. The natural decision problems are also studied; strikingly, most of them are no harder than in the original models. A parametric version of nonemptiness is also considered. Based on joint work with Arka Ghosh, Guillermo A. Pérez, and Ritam Raha.

Timed automata are a well-established model for real-time systems. Over the years, various timed temporal logics (such as MTL and MITL) have been proposed with the goal of model-checking real-time systems. A widely used formalism to model timed specifications is event-clock automata (ECA). Even though ECA are a subclass of timed automata, the translation from ECA to timed automata is expensive. Consequently, a framework that can capture the features of both timed automata and event-clock automata would provide a way to directly model-check timed temporal specifications over timed systems. In this work, we propose a new model, called Generalized Timed Automata (GTA), that unifies the features of various models such as timed automata, event-clock automata (with and without diagonal constraints), and automata with timers. Our main contribution is a new simulation-based zone algorithm for checking reachability in this unified model. While such algorithms are known to exist for timed automata, and have recently been shown for event-clock automata without diagonal constraints, this is the first result that can handle event-clock automata with diagonal constraints and automata with timers. We also provide a prototype implementation of our model and algorithm in TChecker, an open-source platform for timed automata analysis, and show promising results on several existing and new benchmarks. This is the first effective implementation not just for our unified model, but even for automata with timers or for event-clock automata (with predicting clocks) that does not go through a costly translation via timed automata. Joint work with S Akshay, Paul Gastin, Aniruddha Joshi and B Srivathsan.

In ontology-mediated querying, a query is combined with an ontology to enrich querying with domain knowledge and to facilitate access to incomplete and heterogeneous data. Most of the existing studies have concentrated on the basic problem of single-testing, which means deciding, given an ontology-mediated query (OMQ) Q=(O,q), a database D, and a candidate answer c, whether c is indeed an answer to Q on D. From the viewpoint of many practical applications, however, the assumption that a candidate answer is provided is hardly realistic, and it seems much more relevant to efficiently enumerate all answers to Q on D. In this talk, I will discuss sufficient and necessary conditions to achieve enumeration algorithms that have a linear-time preprocessing phase (in the size of the database) and constant delay between answers, with no answer repetition. I will focus on OMQs Q=(O,q) where the query q is a conjunctive query and the ontology O is expressed in the description logic ELIHF. This logic allows for complex concept definitions and functionality assertions. The talk is based on joint work with Carsten Lutz, presented at the Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI'23).

Discrete modelling as the basis of problem solving is an essential skill for computer scientists, but the correct use of formal languages like propositional logic for such purposes remains a big challenge for undergraduate students. We present a tool, called DiMo, aimed at supporting the acquisition of such formal modelling competencies using propositional logic. The DiMo tool provides a language which allows propositional formulas -- parameterised by integers -- to be specified and checked for satisfiability, validity and equivalence, and thus allows parametrised real-world problems to be modelled in propositional logic. We specifically focus on DiMo's generic capabilities to form problem-specific feedback that allows users to visualise the result of their modelling attempts in terms of the modelled problem at hand, thus helping students to initiate corresponding learning cycles. The results presented in this talk are based on joint work with Norbert Hundeshagen, John Hundhausen, Stefan Kablowski and Martin Lange.

We consider "bidding games", a class of two-player zero-sum "graph games". The game proceeds as follows. Both players have bounded budgets. A token is placed on a vertex of a graph, in each turn the players simultaneously submit bids, and the higher bidder moves the token (ties are broken according to a predefined mechanism). Player 1 wins the game iff the token visits a designated target vertex. We consider, for the first time, "poorman discrete-bidding" in which the granularity of the bids is restricted and the higher bid is paid to the bank. In contrast, in "Richman continuous-bidding" bids can be arbitrarily small and are paid to the other player. While the latter mechanism is technically more accessible, the former is more appealing from a practical standpoint. Our study focuses on "threshold budgets", which is the necessary and sufficient initial budget required for Player 1 to ensure winning against a given Player 2 budget. We show existence of thresholds when Player 1 wins bidding ties. In DAGs, we show that threshold budgets can be approximated with error bounds by thresholds under continuous-bidding and that they exhibit a periodic behavior. We identify closed-form solutions in special cases. We implement and experiment with an algorithm to find threshold budgets. This is joint work with Guy Avni, Tobias Meggendorfer, Josef Tkadlec, and Djordje Zikelic.
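As background on the technically more accessible mechanism: in continuous-bidding Richman reachability games, the threshold ratio at a vertex is classically the average of the thresholds of its best and worst successors, with 0 at the target and 1 at the sink. A value-iteration sketch on a hypothetical path graph (the poorman discrete-bidding thresholds studied in the talk behave differently and are only approximated by these):

```python
def richman_thresholds(succ, target, sink, iters=2000):
    """Value iteration for continuous-bidding Richman thresholds:
    thr(v) = (min over successors + max over successors) / 2,
    with thr(target) = 0 and thr(sink) = 1."""
    thr = {v: (0.0 if v == target else 1.0) for v in succ}
    for _ in range(iters):
        for v in succ:
            if v not in (target, sink):
                vals = [thr[w] for w in succ[v]]
                thr[v] = (min(vals) + max(vals)) / 2
    return thr

# Path: target - v1 - v2 - sink; the token can move to either neighbour.
succ = {'t': ['t'], 'v1': ['t', 'v2'], 'v2': ['v1', 's'], 's': ['s']}
thr = richman_thresholds(succ, 't', 's')
print(round(thr['v1'], 3), round(thr['v2'], 3))  # 0.333 0.667
```

The graph, vertex names, and iteration count here are invented for illustration; the closed-form values 1/3 and 2/3 on this path are the fixpoint of the averaging equations.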

We study the task, for a given language L, of enumerating the (generally infinite) sequence of its words, without repetitions, while bounding the delay between two consecutive words. To allow for delay bounds that do not depend on the current word length, we assume a model where we produce each word by editing the preceding word with a small edit script, rather than writing out the word from scratch. In particular, this witnesses that the language is orderable, i.e., we can write its words as an infinite sequence such that the Levenshtein edit distance between any two consecutive words is bounded by a value that depends only on the language. For instance, (a + b)∗ is orderable (with a variant of the Gray code), but a∗ + b∗ is not. We characterize which regular languages are enumerable in this sense, and show that this can be decided in PTIME in an input deterministic finite automaton (DFA) for the language. In fact, we show that, given a DFA A, we can compute in PTIME automata A1, ..., At such that L(A) is partitioned as L(A1) ⊔ ... ⊔ L(At) and every L(Ai) is orderable in this sense. Further, we show that the value of t obtained is optimal, i.e., we cannot partition L(A) into fewer than t orderable languages. In the case where L(A) is orderable (i.e., t = 1), we show that the ordering can be produced by a bounded-delay algorithm: specifically, the algorithm runs in a suitable pointer machine model, and produces a sequence of bounded-length edit scripts to visit the words of L(A) without repetitions, with bounded delay (exponential in |A|) between each script. In fact, we show that we can achieve this while only allowing the edit operations push and pop at the beginning and end of the word, which implies that the word can in fact be maintained in a double-ended queue. By contrast, when fixing the distance bound d between consecutive words and the number of classes of the partition, it is NP-hard in the input DFA A to decide if L(A) is orderable in this sense, already for finite languages. Last, we study the model where push-pop edits are only allowed at the end of the word, corresponding to a case where the word is maintained on a stack. We show that these operations are strictly weaker and that the slender languages are precisely those that can be partitioned into finitely many languages that are orderable in this sense. For the slender languages, we can again characterize the minimal number of languages in the partition, and achieve bounded-delay enumeration. This is joint work with Mikaël Monet, and was presented at STACS'23.
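The Gray-code example mentioned above is easy to make concrete for a fixed word length: the binary-reflected Gray code visits all of {a,b}^n so that consecutive words differ in exactly one position, i.e., they are at edit distance 1. A sketch (only of this fixed-length fragment, not of the full ordering of (a + b)∗):

```python
def gray_words(n):
    """Enumerate {a,b}^n in binary-reflected Gray code order, so that
    consecutive words differ in exactly one position."""
    words = []
    for i in range(2 ** n):
        g = i ^ (i >> 1)  # Gray code of i: adjacent codes differ in one bit
        words.append(''.join('ab'[(g >> j) & 1] for j in range(n)))
    return words

ws = gray_words(3)
print(ws)
# Consecutive words are at Hamming (hence edit) distance exactly 1:
print(all(sum(x != y for x, y in zip(u, v)) == 1
          for u, v in zip(ws, ws[1:])))  # True
```

By contrast, in a∗ + b∗ any ordering must eventually jump between a long block of a's and a long block of b's, so the edit distance between consecutive words cannot be bounded, which is why that language is not orderable.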

Ω-automata are a type of automata for ω-regular languages. Instead of infinite words, they read pairs of finite words, called lassos, that represent ultimately periodic words. Because ω-regular languages are determined by their ultimately periodic fragments, Ω-automata can be associated with ω-regular languages. In terms of structure, they resemble pairs of DFAs, and as a result they can be modelled as coalgebras. We study the categorical relationship between Ω-automata and Wilke algebras, which recognise ω-regular languages. We present a chain of adjunctions starting from the category of Ω-automata without initial states and ending with the dual of the category of quotients of the free Wilke algebra. As key ingredients, we determine how the Ω-automata properties of circularity and coherence change under transition reversal and propose novel transformations between Ω-automata and Wilke algebras. Consequently, we obtain a dual adjunction between Ω-automata (with an initial state) for a language and quotients of the free Wilke algebra for that language. We note that every quotient Wilke algebra is mapped to an observable Ω-automaton, and every reachable Ω-automaton is mapped to the syntactic Wilke algebra. This is joint work with Helle Hvid Hansen (University of Groningen) and Clemens Kupke (University of Strathclyde).
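To illustrate the lasso representation: a pair (u, v) stands for the ultimately periodic word u v^ω, and distinct lassos can denote the same word. A sketch of an equivalence check, comparing a prefix long enough that, by a Fine-and-Wilf-style argument, agreement on it implies equality everywhere (function names are mine):

```python
from math import lcm

def expand(u, v, length):
    """Prefix of the ultimately periodic word u v^omega."""
    s = u
    while len(s) < length:
        s += v
    return s[:length]

def same_lasso(u1, v1, u2, v2):
    """Do the lassos (u1, v1) and (u2, v2) represent the same ultimately
    periodic word?  Past both stems, the words are periodic with periods
    |v1| and |v2|, so agreement on a prefix of the length below suffices."""
    n = max(len(u1), len(u2)) + 2 * lcm(len(v1), len(v2))
    return expand(u1, v1, n) == expand(u2, v2, n)

print(same_lasso("a", "ba", "ab", "ab"))  # True: both denote (ab)^omega
print(same_lasso("a", "ab", "a", "ba"))   # False: aabab... vs ababa...
```

This many-to-one representation is exactly why an ω-regular language is determined by which lassos an Ω-automaton accepts, rather than by the lassos themselves.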

We study the complexity of the model-checking problem of Timed Recursive CTL (TRecCTL). This logic is obtained as the merger of two extensions of CTL: Timed CTL and Temporal Logic with Recursion. The former extends CTL by operators that talk about timed aspects of systems defined by timed automata. The latter adds a recursion operator to CTL, greatly increasing the expressive power of the resulting logic. We obtain that the model-checking problem for TRecCTL is 2EXPTIME-complete. Moreover, the model-checking problem for the so-called tail-recursive fragment of TRecCTL, obtained by limiting the way recursion is used, is EXPSPACE-complete. Notably, the former problem is already 2EXPTIME-hard in its expression complexity, and the latter is already EXPSPACE-complete in its data complexity. Moreover, the hardness results already hold for timed automata with one clock only; for Timed CTL, in contrast, the number of clocks matters.

The talk proposes dynamic parallel algorithms for connectivity and bipartiteness of undirected graphs that require constant time and n^{1/2} · polylog(n) work on the (arbitrary) CRCW PRAM model. The work of these algorithms almost matches the work of the O(log n)-time algorithm for connectivity by Kopelowitz et al. (2018) on the EREW PRAM model and the time of the sequential algorithm for bipartiteness by Eppstein et al. (1997). In particular, it is shown that the sparsification technique, which has been used in both mentioned papers, can in principle also be used for constant-time algorithms in the CRCW PRAM model, despite the logarithmic depth of sparsification trees. This talk presents as yet unpublished joint work with Thomas Schwentick.

Previous work on dynamic complexity has established that there exist dynamic constant-time parallel algorithms for regular tree languages and context-free languages under label or symbol changes. However, these algorithms were not developed with the goal of minimising work (or, equivalently, the number of processors). In fact, inspecting them yields work bounds of O(n^2) and O(n^7) per change operation, respectively. In this talk, dynamic algorithms for regular tree languages are proposed that generalise the previous algorithms in that they allow unbounded node rank and leaf insertions, while improving the work bound from O(n^2) to O(n^ϵ), for arbitrary ϵ > 0. For context-free languages, algorithms with better work bounds (compared with O(n^7)) are proposed for restricted classes: for every ϵ > 0 there are such algorithms for deterministic context-free languages with work bound O(n^(3+ϵ)) and for visibly pushdown languages with work bound O(n^(2+ϵ)). This talk is based on joint work with Jonas Schmidt and Thomas Schwentick.

Deep learning methods, especially their most prominent representative, (deep) neural networks, have proved very successful in a broad range of applications, delivering impressive results. Unsurprisingly, deep learning methods are also used in safety-critical applications like early disease detection or driving assistants, which naturally leads to the need for safety certificates. Currently very popular are so-called Graph Neural Networks (GNNs), a neural network model for computing functions over graphs. While there is ongoing work on (formal) verification of GNNs, there is so far no work giving reliable decidability or complexity bounds for the corresponding problems. Thus, there are currently no fundamental results that frame and guide the development of GNN verification algorithms. We present our contributions to this topic, establishing the first (un)decidability results for the verification problem of common safety properties of so-called Message Passing Neural Networks (MPNNs), one of the most popular GNN variants. Additionally, we discuss strong conjectures leading to lower and upper complexity bounds for the same problems, and we assess the implications of our results for the development of corresponding sound and complete verification algorithms.

Cost register automata (CRAs) are an extension of deterministic finite-state automata. Instead of accepting or rejecting words, they assign each word a value (which can represent, for example, a cost, probability, or duration of an event). This value is computed with a finite set of write-only registers which are updated every time a transition is taken. CRAs are tightly related to weighted automata (WAs), and natural syntactic restrictions on CRAs allow one to define new subclasses of functions which are not definable in terms of WAs. One such restriction is to bound the number of registers. We show that for CRAs with only three registers, universality (are the values of all words below/above a certain threshold?) is still undecidable, both over the tropical semiring and over the semiring of rational numbers with the usual addition and multiplication. This holds true even for copyless linear CRAs with resets. In contrast, we show that the zeroness problem (does the CRA output zero on all input words?) for CRAs over the tropical semiring becomes solvable in polynomial time if the number of registers is constant, while it is PSPACE-complete without this assumption. This is joint work with Laure Daviaud (City, University of London).
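As a concrete illustration (a hypothetical example of my own, not from the talk), here is a tiny two-register CRA over the tropical semiring (min, +) with resets: it maps a word over {a, b} to the length of its shortest maximal block of a's.

```python
import math

def run_cra(word: str) -> float:
    """Sketch of a two-register tropical CRA: register x tracks the current
    a-block, register y tracks the minimum over completed a-blocks (the
    min-semiring's neutral element is +inf); a one-bit finite state records
    whether we are currently inside an a-block."""
    x, y = 0, math.inf
    in_block = False
    for c in word:
        if c == 'a':
            x, in_block = x + 1, True      # update: x := x + 1
        else:
            if in_block:
                y = min(y, x)              # update: y := min(y, x)
            x, in_block = 0, False         # reset x
    return min(y, x) if in_block else y    # state-dependent output function

print(run_cra("aabaaab"))  # a-blocks have lengths 2 and 3; shortest is 2
```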

In this talk, we consider history-determinism, a restricted form of non-determinism, for Vector Addition Systems with States (VASS) when used as acceptors to recognise languages of finite words, both with coverability and with reachability acceptance. History-determinism requires that the non-deterministic choices can be resolved on the fly, based on the past and without jeopardising acceptance of any possible continuation of the input word. Our results show that history-deterministic (HD) VASS sit strictly between deterministic and non-deterministic VASS. We compare the relative expressiveness of HD systems, and the closure properties of the induced language classes, with coverability and reachability semantics, with or without $\epsilon$-labelled transitions. With two or more counters, HD-VASS are essentially able to approximate 2-counter Minsky machines, leading to several undecidability results: it is undecidable whether a VASS is history-deterministic, or whether a language-equivalent history-deterministic VASS exists. Checking language inclusion between history-deterministic $2$-VASS is also undecidable. This talk is based on joint work with Patrick Totzke and David Purser.

We revisit the membership problem for subclasses of rational relations over finite and infinite words: given a relation R in a class C_2, does R belong to a smaller class C_1? The subclasses of rational relations that we consider are the deterministic rational relations, the synchronous (also called automatic or regular) relations, and the recognizable relations. We provide improved complexity bounds and new decidability results. (i) Testing whether a synchronous relation over infinite words is recognizable is NL-complete if the relation is given by a deterministic ω-automaton and PSPACE-complete if it is given by a nondeterministic one. (ii) Testing whether a deterministic rational binary relation is recognizable is decidable in polynomial time. For relations of higher arity, we present a randomized exponential-time algorithm. (iii) We provide the first algorithm to decide whether a deterministic rational relation is synchronous. For binary relations, the algorithm even runs in polynomial time.

Regular languages are one of the most fundamental concepts in formal language theory. In recent decades, there has been an ongoing effort to extend the definition of regularity to structures more complex than finite words. Examples include infinite words, as well as finite and infinite trees. A recently developed line of research (see the references below) seeks a common framework that would encapsulate (at least a significant portion of) these various generalizations. The approach proposed in the cited publications is to define regularity in terms of monads and their algebras: the framework gives a definition of regularity for languages that are subsets of MΣ, where M is a monad and Σ is a finite alphabet. In this presentation, I would like to discuss a possible extension of this framework for studying transducers. For this purpose, I am going to consider functors that simultaneously carry both the structure of a monad and that of a comonad. It turns out that for such functors one can define a class of recognizable functions of type MΣ → MΓ, which (depending on the chosen M) can encompass deterministic Mealy machines (both left-to-right and right-to-left), letter-to-letter rational functions, and some classes of transductions for structures other than words. In the talk, I plan to discuss which properties of M are required to ensure that the class of its transductions is closed under composition, and to discuss potential directions for future work. REFERENCES: Bojańczyk, Mikołaj. "Recognizable languages over monads." Developments in Language Theory: 19th International Conference, DLT 2015, Liverpool, UK, July 27-30, 2015, Proceedings. Cham: Springer International Publishing, 2015. Bojańczyk, Mikołaj. "Languages recognized by finite semigroups, and their generalizations to objects such as trees and graphs, with an emphasis on definability in monadic second-order logic." arXiv e-prints (2020): arXiv-2008. Bojańczyk, Mikołaj, Bartek Klin, and Julian Salamanca. "Monadic monadic second order logic." arXiv preprint arXiv:2201.09969 (2022).

We introduce the notion of synchronous algebra, a type of algebra tailored to recognize regular relations: any relation admits a syntactic synchronous algebra, and the relation is regular (i.e. accepted by a synchronous automaton) if and only if this algebra is finite. We show that these algebras arise naturally via two different constructions: (1) using Bojańczyk's theory of recognizability via monads over typed sets, and (2) by encoding synchronous words as finite words over an extended alphabet, whereby synchronous algebras arise as the class of monoids quotiented by the syntactic monoid of "well-formed words", namely the language of all words corresponding to some encoding. Lastly, we show why this notion of algebra is inherently more suitable for studying properties of regular relations than monoids or semigroups.

We propose a novel topological perspective on data languages recognizable by orbit-finite nominal monoids. For this purpose, we introduce pro-orbit-finite nominal topological spaces, a form of nominal topological spaces generalizing classical profinite spaces. Assuming globally bounded support sizes of the underlying nominal sets, which correspond to bounds on the number of registers of automata, pro-orbit-finite nominal topological spaces coincide with nominal Stone spaces and are shown to be dually equivalent to a suitable subcategory of nominal boolean algebras. Recognizable data languages are characterized as topologically clopen sets of pro-orbit-finite words. In addition, we explore the expressive power of pro-orbit-finite equations and provide a nominal version of Reiterman's pseudovariety theorem. This talk is based on joint work with Fabian Birkmann and Stefan Milius, presented at ICALP 2023.

We show how Emerson-Lei games can be solved symbolically by providing a fixpoint-based characterization of the winning region, which results from an analysis of the Zielonka tree of the winning condition. The proposed algorithm generalizes the solutions of known winning conditions such as parity, Streett, Rabin, and GR(1) objectives, and for these conditions it reproduces previously known algorithms and complexity results; the algorithm solves unrestricted Emerson-Lei games with n nodes, m edges and d colors in time O(n^(d/2) · d! · m).

Since the 1970s, with the work of McNaughton, Papert and Schützenberger, a regular language has been known to be definable in first-order logic if and only if its syntactic monoid is aperiodic. This algebraic characterisation of a fundamental logical fragment was extended to the quantitative case by Droste and Gastin, who dealt with polynomially ambiguous weighted automata and a restricted fragment of weighted first-order logic. In the quantitative setting, the full weighted first-order logic (without the restriction on quantifier alternation that Droste and Gastin use) is more powerful than weighted automata, and extensions of the automata with two-way navigation and with pebbles or nesting capabilities have been introduced to deal with it. In this work, we characterise the fragment of these extended weighted automata that recognises exactly the full weighted first-order logic, under the condition that the automata are polynomially ambiguous. This is joint work with Benjamin Monmege.

The MIX languages have attracted the attention of at least two communities: mathematical linguistics and computational group theory. The first considers these languages to be completely unstructured, and therefore to lie outside any class of languages used to define natural languages. The second community is interested in classifying groups in terms of the formal properties of their word languages. In this work we prove that the MIX2 language is not generated by EDT0L systems. We first prove that it is not generated by a non-branching Multiple Context-Free Grammar, a formalism known to be equivalent to EDT0L systems of finite index. We prove this by providing a family of counterexample words. Finally, we bridge the gap to general EDT0L systems using a theorem of Latteux.

Markov decision processes can be viewed as transformers of probability distributions. While this view is useful from a practical standpoint to reason about trajectories of distributions, basic reachability and safety problems are known to be computationally intractable (i.e., Skolem-hard) to solve in such models. Further, we show that even for simple examples of MDPs, strategies for safety objectives over distributions can require infinite memory and randomization. In light of this, we present a novel overapproximation approach to synthesize strategies in an MDP, such that a safety objective over the distributions is met. More precisely, we develop a new framework for template-based synthesis of certificates as affine distributional and inductive invariants for safety objectives in MDPs. We provide two algorithms within this framework. One can only synthesize memoryless strategies, but has relative completeness guarantees, while the other can synthesize general strategies. The runtime complexity of both algorithms is in PSPACE. We implement these algorithms and show that they can solve several non-trivial examples. This is joint work with S. Akshay, Krishnendu Chatterjee and Tobias Meggendorfer that will be presented at CAV 2023.
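To illustrate the distribution-transformer view with a toy example of my own (the MDP and its numbers below are assumptions, not taken from the talk): fixing a memoryless strategy turns an MDP into a Markov chain, and each step maps the current distribution mu to mu · P, so a safety objective constrains every distribution along this trajectory.

```python
# Hypothetical 2-state MDP with actions 'a' and 'b'; P[act][s] is the
# successor distribution from state s under action act.
P = {
    'a': [[0.9, 0.1], [0.2, 0.8]],
    'b': [[0.5, 0.5], [0.0, 1.0]],
}

def step(mu, strategy):
    """One distribution-transformer step; strategy[s] is the probability
    of playing action 'a' in state s (memoryless, possibly randomized)."""
    out = [0.0, 0.0]
    for s, mass in enumerate(mu):
        for t in range(2):
            row_t = strategy[s] * P['a'][s][t] + (1 - strategy[s]) * P['b'][s][t]
            out[t] += mass * row_t
    return out

mu = [1.0, 0.0]                         # start surely in state 0
for _ in range(3):
    mu = step(mu, strategy=[1.0, 1.0])  # always play action 'a'
print(mu)  # distribution after three steps; a safety spec could demand mu[1] <= 0.5
```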

Given a specification as a Boolean relation between inputs and outputs, Boolean functional synthesis generates a function, called a Skolem function, for each output in terms of the inputs such that the specification is satisfied. There is a large body of recent work which uses different techniques to solve this problem, i.e., synthesize one Skolem function from a given relational specification. But in general, there may be many possibilities for Skolem functions satisfying the same specification, and further, the criteria to pick one or the other may vary from specification to specification.In this work, we develop a new technique to represent the space of all Skolem functions in a criteria-agnostic form that makes it possible to subsequently extract Skolem functions for different criteria. Our approach is based on a novel counter-example guided strategy for existentially quantifying a subset of variables from a specification in negation normal form. We implement this technique and compare our performance with that of other knowledge compilation approaches for Boolean functional synthesis, and show promising results. Surprisingly, there are several benchmarks for which we can synthesize such a "useful" representation of the entire space of Skolem functions, while other tools fail to synthesize even a single Skolem function.
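A toy illustration of my own (not the paper's technique): for the hypothetical spec R(x1, x2, y) = (x2 → (y ≠ x1)), the output y is unconstrained whenever x2 is false, so many different Skolem functions satisfy the same spec; a brute-force check certifies a candidate.

```python
from itertools import product

# Hypothetical relational spec: whenever x2 holds, y must differ from x1.
def spec(x1: bool, x2: bool, y: bool) -> bool:
    return (not x2) or (y != x1)

# One Skolem function among many: y = x1 XOR x2.  Others (e.g. y = not x1)
# also work, since y is free when x2 is false -- this freedom is exactly
# what a representation of the whole space of Skolem functions captures.
def skolem(x1: bool, x2: bool) -> bool:
    return x1 != x2

# A candidate is correct iff substituting it satisfies the spec on all inputs.
ok = all(spec(x1, x2, skolem(x1, x2)) for x1, x2 in product([False, True], repeat=2))
print(ok)  # True
```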

We introduce ppLTLTT, a command-line tool for translating pure-past linear temporal logic formulas into automata monitoring their truth values at every point in time. We show how ppLTLTT can be used to easily extend existing LTL-based tools, such as LTL-to-automata translators and reactive synthesis tools, to support a richer input language. Namely, with ppLTLTT, tools that accept LTL input are also made to handle pure-past LTL as atomic formulae. While the addition of past operators does not increase the expressive power of LTL, it opens up the possibility of writing more intuitive and succinct specifications. We illustrate this intended use of ppLTLTT for Slugs, Strix and Spot's command-line tool LTL2TGBA by describing three corresponding wrapper tools pSlugs, pStrix and pLTL2TGBA, all of which leverage ppLTLTT. All three wrapper tools are designed to seamlessly fit this paradigm, by staying as close to the respective syntax of each underlying tool as possible.

A power series is constructible differentially finite (CDF) if it satisfies a system of polynomial differential equations [Bergeron & Reutenauer 1990, Bergeron & Sattler 1995]. CDF series generalise algebraic power series and find a natural application as exponential generating functions in combinatorial enumeration (e.g., combinatorial species in the sense of André Joyal). For instance, the exponential generating functions of the Bell numbers B(n), of rooted Cayley trees (the sequence n^(n−1)), of series-parallel graphs, and of languages of unordered trees defined by Parikh tree automata are CDF. Bergeron & Reutenauer observed that the equality problem for univariate CDF series is decidable, without providing a complexity bound. We show that zeroness of univariate CDF series can be solved in elementary (2-EXPTIME) complexity. The proof relies on a technical lemma from [Novikov & Yakovenko 1999] showing that chains of polynomial ideals generated by iterated Lie derivatives have at most doubly exponential length. We then extend this result in two directions. First, we show that the 2-EXPTIME complexity is retained for multivariate CDF series. Second, we show that the problem is in PTIME for CDF series satisfying a system of linear differential equations with constant coefficients. Along the way, we show that multivariate CDF series form a very robust class with many pleasant closure properties.
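As a concrete instance (my own worked example, consistent with the classes named above): the EGF of the Bell numbers, B(x) = e^(e^x − 1), is CDF because the pair (B, e^x) satisfies the polynomial system B' = B · e^x and (e^x)' = e^x; extracting coefficients from B' = e^x · B yields the familiar Bell recurrence, which the sketch below uses.

```python
from math import comb

def bell_numbers(n_terms: int) -> list[int]:
    """Coefficients b_n of the EGF B(x) = sum_n b_n x^n / n!, computed from
    the ODE B'(x) = e^x * B(x), which gives b_{n+1} = sum_k C(n, k) b_k."""
    b = [1]
    for n in range(n_terms - 1):
        b.append(sum(comb(n, k) * b[k] for k in range(n + 1)))
    return b

print(bell_numbers(6))  # [1, 1, 2, 5, 15, 52]
```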

The proposed talk is intended for the themes "Verification" and "Applications". The talk aims to introduce the emerging and exciting domain of confidential computing through our recent work on the specification and verification of one of its most critical mechanisms, namely remote attestation, and to ignite discussions on promising directions for future research and collaboration. Confidential Computing (CC) using hardware-based Trusted Execution Environments (TEEs) has emerged as a promising solution for protecting sensitive data in all forms. One of the fundamental characteristics of such TEEs is remote attestation [4], which provides mechanisms for securely measuring and reporting the state of the remote platform and computing environment to a user. We present a novel approach combining a TEE-agnostic attestation architecture with formal analysis, enabling comprehensive and rigorous security analysis of attestation mechanisms in CC. We demonstrate the application of our approach on three prominent industrial representatives: Arm Confidential Compute Architecture (CCA) [1, 2] among architecture-led solutions, Intel Trust Domain Extensions (TDX) [5] among vendor solutions, and Secure CONtainer Environment (SCONE) [3] among frameworks. For each of these solutions, we provide a comprehensive specification of all phases of the attestation mechanism in confidential computing, namely provisioning, initialization, and the attestation protocol. Our approach reveals design and security issues in Intel TDX and SCONE attestation.

Given a data stream of m elements, the Distinct Elements problem is to estimate the number of distinct elements in the stream. Distinct Elements has been the subject of theoretical and empirical investigation over the past four decades, resulting in space-optimal algorithms for it. However, the current state-of-the-art algorithms are often difficult to analyze or impractical. I will present a simple, intuitive, sampling-based space-efficient algorithm whose description and proof are accessible to undergraduates with a knowledge of basic probability theory. Beyond its simplicity, the approach has significant theoretical and practical implications: it allowed us to resolve the open problem of the (discrete) Klee's Measure Problem in the streaming setting and to build a state-of-the-art DNF counter in practice. Relevant publication: https://arxiv.org/abs/2301.10191
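The core of the sampling-based algorithm (as I understand it from the paper linked above; the parameter names here are my own) fits in a few lines: maintain a sample set X and a sampling rate p, and subsample X while halving p whenever the buffer fills.

```python
import random

def estimate_distinct(stream, thresh=100):
    """Sampling-based distinct-elements estimator; `thresh` bounds the
    memory used and controls the accuracy (larger thresh = better estimate)."""
    p = 1.0
    X = set()
    for a in stream:
        X.discard(a)                 # forget any stale decision about a
        if random.random() < p:
            X.add(a)                 # keep a with the current rate p
        if len(X) == thresh:         # buffer full: subsample and halve p
            X = {x for x in X if random.random() < 0.5}
            p /= 2
            if len(X) == thresh:     # exceedingly unlikely
                raise RuntimeError("estimation failed")
    return len(X) / p                # estimate of the distinct count

# With thresh larger than the true distinct count, p stays 1 and the
# estimate is exact:
print(estimate_distinct([i % 50 for i in range(500)], thresh=100))
```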